• Title/Summary/Keyword: Database Compression (데이터베이스 압축)

161 search results

A Method for Distributed Database Processing with Optimized Communication Cost in Dataflow model (데이터플로우 모델에서 통신비용 최적화를 이용한 분산 데이터베이스 처리 방법)

  • Jun, Byung-Uk
    • Journal of Internet Computing and Services, v.8 no.1, pp.133-142, 2007
  • Large database processing is one of the most important techniques in the information society. Since most large databases are regionally distributed, distributed database processing has come into focus. Communication and data compression are the basic technologies for large database processing. To make the most of these technologies, the execution time of each task, the size of the data, and the communication time between processors should be considered. In this paper, a dataflow scheme and a vertically layered allocation algorithm are used to optimize distributed large-database processing. The basic concept of this method is the rearrangement of processes according to the communication time between processors. The paper also introduces measurement models for the execution time, the size of the output data, and the communication time in order to implement the proposed scheme.
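
As a rough illustration of the placement idea in this abstract, the sketch below assigns each task in a layered dataflow graph to the processor that minimizes execution time plus the time to fetch its inputs. It is a minimal sketch: the data structures, cost formula, and greedy layer-by-layer strategy are assumptions for illustration, not the paper's algorithm.

    # Illustrative greedy placement over a layered dataflow graph (not the paper's
    # algorithm): each task goes to the processor where execution time plus the
    # time to pull its predecessors' outputs over the network is smallest.

    def comm_time(size_mb, src, dst, bandwidth):
        """Time to move size_mb of data between processors; zero if co-located."""
        return 0.0 if src == dst else size_mb / bandwidth[src][dst]

    def place_tasks(layers, exec_time, out_size, bandwidth):
        placement = {}
        for layer in layers:                          # vertically layered task order
            for task, preds in layer:                 # preds were placed in earlier layers
                best_proc, best_cost = None, float("inf")
                for proc in exec_time[task]:
                    cost = exec_time[task][proc]
                    for pred in preds:                # add the cost of fetching inputs
                        cost += comm_time(out_size[pred], placement[pred], proc, bandwidth)
                    if cost < best_cost:
                        best_proc, best_cost = proc, cost
                placement[task] = best_proc
        return placement

    # Tiny example: two processors; "join" reads the outputs of "scan_a" and "scan_b".
    layers = [[("scan_a", []), ("scan_b", [])], [("join", ["scan_a", "scan_b"])]]
    exec_time = {"scan_a": {0: 4, 1: 6}, "scan_b": {0: 6, 1: 4}, "join": {0: 3, 1: 3}}
    out_size = {"scan_a": 80, "scan_b": 120}          # MB
    bandwidth = {0: {1: 10.0}, 1: {0: 10.0}}          # MB/s
    print(place_tasks(layers, exec_time, out_size, bandwidth))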


Cross Compressed Replication Scheme for Large-Volume Column Storages (대용량 컬럼 저장소를 위한 교차 압축 이중화 기법)

  • Byun, Siwoo
    • Journal of the Korea Academia-Industrial cooperation Society, v.14 no.5, pp.2449-2456, 2013
  • Column-oriented database storage is an advanced model for large-volume data analysis systems because of its superior I/O performance. Traditional data stores use row-oriented storage, in which the attributes of a record are placed contiguously on disk for fast write operations. For search-mostly data warehouse systems, however, column-oriented storage has become the more appropriate model because of its superior read performance. Recently, solid-state drives using MLC flash memory have become widely recognized as the preferred storage media for high-speed data analysis systems. In this paper, we introduce a fast column-oriented data storage model and then propose a new storage management scheme that uses cross compressed replication for high-speed column-oriented data warehouse systems. Our scheme, which is based on two MLC SSDs, achieves superior performance and reliability by cross-replicating the uncompressed segment and the compressed segment under high CPU and I/O workloads. Based on the performance evaluation, we conclude that our storage management scheme outperforms the traditional scheme in terms of update throughput and response time of the column segments.
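
A minimal sketch of the cross-replication idea described above follows, assuming a simple two-drive layout in which each column segment is kept uncompressed on one drive and zlib-compressed on the other, alternating by segment number; the naming and layout are illustrative, not the paper's exact design.

    import zlib

    # Sketch of cross compressed replication over two drives (layout assumed for
    # illustration): every segment is written raw to one drive and compressed to
    # the other, alternating so each drive holds a mix of both forms.

    def write_segment(seg_id, data, drive_a, drive_b):
        raw, packed = data, zlib.compress(data)
        if seg_id % 2 == 0:
            drive_a[seg_id], drive_b[seg_id] = raw, packed     # A raw, B compressed
        else:
            drive_a[seg_id], drive_b[seg_id] = packed, raw     # A compressed, B raw

    def read_segment(seg_id, drive_a, drive_b):
        """Prefer the uncompressed replica; decompress the other copy if needed."""
        raw_drive, packed_drive = (drive_a, drive_b) if seg_id % 2 == 0 else (drive_b, drive_a)
        if seg_id in raw_drive:
            return raw_drive[seg_id]
        return zlib.decompress(packed_drive[seg_id])

    drive_a, drive_b = {}, {}
    write_segment(0, b"column-segment-0" * 64, drive_a, drive_b)
    assert read_segment(0, drive_a, drive_b) == b"column-segment-0" * 64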

Design of Supplementary Cementitious Materials and Unit Content of Binder for Reducing CO2 Emission of Concrete (콘크리트 CO2 저감을 고려한 혼화재 및 단위 결합재 양의 설계)

  • Yang, Keun-Hyeok; Moon, Jae-Heum
    • Journal of the Korea Concrete Institute, v.24 no.5, pp.597-604, 2012
  • The present study assessed the CO2 emissions of concrete according to the type and replacement ratio of supplementary cementitious materials (SCM) and the concrete compressive strength, using a comprehensive database of 2464 cement concrete specimens and 776 concrete mixes with different SCMs. The system boundary for the CO2 assessment of concrete, based on the Korean life-cycle inventory, was from cradle to pre-construction, covering the constituent-material, transportation, and production phases. As performance efficiency indicators, binder and CO2 intensities were analyzed, and simple equations to evaluate the CO2 emissions of concrete were then formulated as functions of the concrete compressive strength and the replacement ratio of each SCM. The proposed equations are expected to be practical and useful as a guideline for determining the type and replacement ratio of SCM and the unit binder content in a concrete mix design that satisfies the target compressive strength and the target CO2 reduction relative to plain cement concrete.
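
For illustration only, the binder and CO2 intensity indicators referred to above are commonly expressed per unit of delivered compressive strength; the paper's exact definitions and fitted emission equations are not reproduced here:

    b_i = \frac{B}{f_{ck}}, \qquad c_i = \frac{W_{\mathrm{CO_2}}}{f_{ck}}

where B is the unit binder content (kg/m^3), W_CO2 the CO2 emission per m^3 of concrete, and f_ck the compressive strength (MPa).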

A Service Strategy of Binary Document Images based on JBIG in Digital Library (전자도서관에서의 JBIG 기반 이진 문서영상 서비스 방안)

  • 한영미; 김민환
    • Journal of Korea Multimedia Society, v.1 no.1, pp.37-44, 1998
  • While SGML (Standard Generalized Markup Language) tends to be used in multimedia document management systems, binary document images are still widely used for serving the contents of printed documents in digital libraries. However, printed documents are typically scanned at 200 dpi and the scanned binary images are compressed with the ITU-T T.6 method, so it is difficult to present them in good quality and to compress them efficiently. In this paper, considering the quality of binary document images and the expandability and effectiveness of their database, we show that the suitable scanning resolution is 600 dpi and the best compression method is JBIG. A staged service strategy is also suggested to overcome the long decompression time of JBIG, based on an analysis of how binary document images are retrieved for display on a monitor and for printing. Experiments on several typical binary document images verify the high compression rate of JBIG and the effectiveness of the staged service strategy.
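
The staged service strategy can be pictured with a small sketch: keep a reduced-resolution preview for on-screen viewing and decode the full 600-dpi JBIG image only when printing is requested. The storage layout, resolutions, and function names below are assumptions for illustration; a real deployment would plug in an actual JBIG decoder.

    # Sketch of a two-stage service policy for 600-dpi JBIG page images
    # (storage layout, resolutions, and names are assumptions for illustration).

    store = {
        # page_id -> {"preview_150dpi": bytes, "jbig_600dpi": bytes}
        "page-001": {"preview_150dpi": b"<preview bitmap>", "jbig_600dpi": b"<jbig stream>"},
    }

    def decode_jbig(blob):
        # Placeholder: hook up a real JBIG decoder (external library or tool) here.
        raise NotImplementedError

    def serve_page(page_id, target):
        entry = store[page_id]
        if target == "monitor":
            return entry["preview_150dpi"]        # stage 1: fast, good enough on screen
        return decode_jbig(entry["jbig_600dpi"])  # stage 2: full quality for printing

    print(len(serve_page("page-001", "monitor")), "bytes served for on-screen viewing")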


A Compressed Hot-Cold Clustering to Improve Index Operation Performance of Flash Memory-SSD Systems (플래시메모리-SSD의 인덱스 연산 성능 향상을 위한 압축된 핫-콜드 클러스터링 기법)

  • Byun, Si-Woo
    • Journal of the Korea Academia-Industrial cooperation Society, v.11 no.1, pp.166-174, 2010
  • SSDs are among the best storage media for portable and desktop computers. Their features include non-volatility, low power consumption, and fast read access times, which make flash memory a strong candidate as a major database storage component for desktop and server computers. However, traditional B-Tree-based index management schemes need to be improved because flash memory operations are relatively slow compared with RAM. To achieve this, we propose a new index management scheme based on compressed hot-cold clustering, called CHC-Tree. CHC-Tree-based index management improves index operation performance by dividing index nodes into hot and cold segments, compressing the pointers and keys in the index nodes, and clustering the hot and cold segments. An offset compression technique that uses the unused free area in cold index nodes reduces the number of slow erase operations during index node insert/delete processing. Simulation results show that our scheme significantly reduces write and erase operation overheads, improving B-Tree index search performance by up to 26 percent and index update performance by up to 23 percent.
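
To make the hot-cold idea concrete, here is a minimal sketch with an assumed access threshold and node layout rather than the CHC-Tree specification: nodes are split by access frequency, and cold nodes store their child pointers as small offsets from a common base before being deflated, so the freed space can absorb later updates and defer erases.

    import zlib

    # Sketch of hot/cold index-node clustering with offset-compressed cold nodes
    # (threshold and node layout are assumptions, not the CHC-Tree specification).

    HOT_THRESHOLD = 10                     # accesses per window that make a node hot

    def split_hot_cold(nodes, access_count):
        hot  = [n for n in nodes if access_count.get(n["id"], 0) >= HOT_THRESHOLD]
        cold = [n for n in nodes if access_count.get(n["id"], 0) <  HOT_THRESHOLD]
        return hot, cold

    def compress_cold_node(node):
        """Encode child pointers as offsets from a common base, then deflate them;
        the space freed inside the node absorbs later inserts and defers erases."""
        base = min(node["children"])
        raw = b"".join((p - base).to_bytes(4, "little") for p in node["children"])
        return {"id": node["id"], "base": base, "packed": zlib.compress(raw)}

    def expand_cold_node(cnode):
        raw = zlib.decompress(cnode["packed"])
        return [cnode["base"] + int.from_bytes(raw[i:i + 4], "little")
                for i in range(0, len(raw), 4)]

    node = {"id": 7, "children": [102400, 102408, 102432]}
    assert expand_cold_node(compress_cold_node(node)) == node["children"]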

ECG Signal Compression using Feature Points based on Curvature (곡률을 이용한 특징점 기반 심전도 신호 압축)

  • Kim, Tae-Hun; Kim, Sung-Wan; Ryu, Chun-Ha; Yun, Byoung-Ju; Kim, Jeong-Hong; Choi, Byung-Jae; Park, Kil-Houm
    • Journal of the Korean Institute of Intelligent Systems, v.20 no.5, pp.624-630, 2010
  • Since electrocardiogram (ECG) signals are generally sampled at frequencies above 200 Hz, a method that compresses them without losing diagnostic information is required to store and transmit them efficiently. In this paper, an ECG signal compression method that uses curvature-based feature points is proposed. The feature points of the P, Q, R, S, and T waves, which are the critical components of the ECG signal, have large curvature values compared with other vertices. These vertices are therefore extracted by the proposed method using the local extrema of the curvature. Furthermore, to minimize the reconstruction error of the ECG signal, extra vertices are added by an iterative vertex selection method. Experimental results on ECG signals from the MIT-BIH Arrhythmia database show that the vertices selected by the proposed method preserve all feature points of the ECG signal and are more efficient than the AZTEC (Amplitude Zone Time Epoch Coding) method.
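
A minimal sketch of curvature-based feature-point selection follows; the curvature formula is the standard one for a sampled curve, while the sampling rate, retention ratio, and the omitted iterative refinement step are assumptions for illustration.

    import numpy as np

    # Sketch of curvature-based feature-point selection for an ECG trace
    # (retention ratio and test signal are assumed for illustration).

    def curvature(y, dt=1.0):
        dy = np.gradient(y, dt)
        d2y = np.gradient(dy, dt)
        return np.abs(d2y) / (1.0 + dy ** 2) ** 1.5

    def feature_points(y, keep=0.05):
        """Return indices of the strongest local curvature maxima."""
        k = curvature(y)
        local_max = np.r_[False, (k[1:-1] > k[:-2]) & (k[1:-1] > k[2:]), False]
        idx = np.flatnonzero(local_max)
        idx = idx[np.argsort(k[idx])[::-1]]            # strongest curvature first
        n = max(2, int(keep * len(y)))
        # The paper additionally adds vertices iteratively where the piecewise-linear
        # reconstruction error is largest; that refinement step is omitted here.
        return np.sort(idx[:n])

    t = np.linspace(0, 1, 500)
    spike = np.exp(-((t - 0.5) / 0.01) ** 2)           # crude R-wave-like pulse
    print(feature_points(spike)[:10])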

Space-Efficient Compressed-Column Management for IoT Collection Servers (IoT 수집 서버를 위한 공간효율적 압축-칼럼 관리)

  • Byun, Siwoo
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology, v.9 no.1, pp.179-187, 2019
  • With the recent development of small computing devices, IoT sensor networks can be widely deployed and are now readily available, offering sensing, computation, and communication functions at low cost. Sensor data management is a major component of the Internet of Things environment. The huge volume of data produced and transmitted by sensing devices can provide a great deal of useful information and is often regarded as the next big data for businesses. New column-wise compression technology is adopted in large data servers because of its superior space efficiency. Since sensor nodes have narrow bandwidth and fault-prone wireless channels, sensor-based storage systems are subject to incomplete data services. In this study, we give a short overview and analysis of IoT sensor networks and propose a new storage management scheme for IoT data. The scheme is based on a RAID storage model that uses column-wise segmentation and compression to improve space efficiency without sacrificing I/O performance. Computer performance simulation shows that the proposed storage control scheme outperforms the previous RAID control.
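
As an illustration of column-wise segmentation and compression, the sketch below cuts each column into fixed-size segments, compresses them with zlib, and stripes them across several drives; the segment size and striping rule are assumptions, not the paper's RAID design.

    import zlib

    # Sketch of column-wise segmentation, compression, and striping for an IoT
    # collection server (segment size and striping rule are assumptions).

    SEGMENT_ROWS = 1024

    def store_readings(rows, drives):
        """Split rows into columns, cut each column into fixed-size segments,
        compress every segment, and stripe the segments across the drives."""
        columns = list(zip(*rows))                     # column-wise layout
        for col_no, column in enumerate(columns):
            for start in range(0, len(column), SEGMENT_ROWS):
                segment = column[start:start + SEGMENT_ROWS]
                blob = zlib.compress(repr(segment).encode())
                drive = drives[(col_no + start // SEGMENT_ROWS) % len(drives)]
                drive[(col_no, start)] = blob

    drives = [{}, {}, {}]
    rows = [(i, 20.0 + 0.01 * i, "node-7") for i in range(3000)]   # id, temp, source
    store_readings(rows, drives)
    print(sum(len(d) for d in drives), "compressed segments stored")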

Design of a Content-based Multimedia Information Retrieval System (내용 기반 멀티미디어 정보 검색 시스템의 설계)

  • 박민식; 유기형
    • Journal of the Korea Computer Industry Society, v.2 no.8, pp.1117-1122, 2001
  • Recently, Internet search of image information across various multimedia databases has drawn tremendous attention, and several studies on image information retrieval methods are in progress. By combining the wavelet transform with correlation matrices, we propose a novel and highly efficient feature vector extraction algorithm capable of robust similarity matching. Simulation results show faster and more accurate candidate image retrieval than conventional algorithms. The improvement is obtained because the feature vectors are compressed at a 256:1 ratio while the correlation matrices provide full information for better matching.
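
A hedged sketch of a wavelet-plus-correlation feature vector follows, using PyWavelets; the decomposition depth, the choice of the LL band, and the distance measure are assumptions for illustration rather than the paper's exact algorithm.

    import numpy as np
    import pywt                                        # PyWavelets

    # Sketch of a wavelet + correlation-matrix feature vector for image retrieval
    # (decomposition depth, band choice, and distance measure are assumptions).

    def feature_vector(image, levels=3):
        """Shrink the image with repeated Haar decompositions (keep the LL band),
        then flatten the row-correlation matrix of the small approximation."""
        approx = image.astype(float)
        for _ in range(levels):
            approx, _ = pywt.dwt2(approx, "haar")      # keep only the LL band
        return np.corrcoef(approx).flatten()

    def similarity(f1, f2):
        return -np.linalg.norm(f1 - f2)                # closer to zero means more similar

    rng = np.random.default_rng(0)
    img_a = rng.random((128, 128))
    img_b = img_a + 0.05 * rng.random((128, 128))      # slightly modified copy
    img_c = rng.random((128, 128))                     # unrelated image
    fa, fb, fc = map(feature_vector, (img_a, img_b, img_c))
    print(similarity(fa, fb) > similarity(fa, fc))     # expected: True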


Design and Implementation of a Real-time Bio-signal Obtaining, Transmitting, Compressing and Storing System for Telemedicine (원격 진료를 위한 실시간 생체 신호 취득, 전송 및 압축, 저장 시스템의 설계 및 구현)

  • Jung, In-Kyo; Kim, Young-Joon; Park, In-Su; Lee, In-Sung
    • Journal of the Institute of Electronics Engineers of Korea SC, v.45 no.4, pp.42-50, 2008
  • Real-time bio-signal monitoring systems based on ZigBee and SIP/RTP have been proposed and implemented for telemedicine, but they have stability problems when transmitting bio-signals from the sensors to the receiving side. In this paper, we design and implement a real-time bio-signal monitoring system that focuses on the reliability and efficiency of bio-signal transmission in real time. The system is designed for enhanced architecture and performance in the ubiquitous sensor network, in SIP/RTP real-time transmission, and in database management. A Bluetooth network is combined with the ZigBee network to distribute the traffic of the ECG and the other bio-signals. Modified and multiple RTP sessions are used to ensure real-time transmission of the ECG, other bio-signals, and speech over the Internet. A modified ECG compression method based on DWLT and MSVQ is used to reduce the data rate when storing the ECG in the database. The implemented system shows improved performance in transmitting bio-signals from the sensors to the monitoring console and database, and enables various applications for u-healthcare services.
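
The paper compresses the ECG with DWLT and MSVQ before storing it; as a deliberately simpler stand-in (not the paper's method), the sketch below uses a plain discrete wavelet transform with coefficient thresholding to show how the data rate can be cut before database storage.

    import numpy as np
    import pywt

    # The paper's ECG path uses DWLT and MSVQ; as a deliberately simpler stand-in,
    # this sketch thresholds ordinary wavelet coefficients to cut the data rate
    # before the signal is written to the database.

    def compress_ecg(signal, wavelet="db4", keep=0.1):
        coeffs = pywt.wavedec(signal, wavelet, level=4)
        cutoff = np.quantile(np.abs(np.concatenate(coeffs)), 1.0 - keep)
        return [np.where(np.abs(c) >= cutoff, c, 0.0) for c in coeffs]   # mostly zeros

    def decompress_ecg(coeffs, wavelet="db4"):
        return pywt.waverec(coeffs, wavelet)

    fs = 250                                           # Hz, a typical ECG sampling rate
    t = np.arange(0, 4, 1 / fs)
    ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 12 * t)
    rec = decompress_ecg(compress_ecg(ecg_like))[: len(ecg_like)]
    print("max reconstruction error:", float(np.max(np.abs(rec - ecg_like))))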

Fast Video Detection Using Temporal Similarity Extraction of Successive Spatial Features (연속하는 공간적 특징의 시간적 유사성 검출을 이용한 고속 동영상 검색)

  • Cho, A-Young; Yang, Won-Keun; Cho, Ju-Hee; Lim, Ye-Eun; Jeong, Dong-Seok
    • The Journal of Korean Institute of Communications and Information Sciences, v.35 no.11C, pp.929-939, 2010
  • The growth of multimedia technology drives the development of video detection for large database management and illegal-copy detection. To meet this demand, this paper proposes a fast video detection method applicable to large databases. The algorithm uses spatial features based on the gray-value distribution of frames and temporal features based on a temporal similarity map. We form a video signature from the extracted spatial and temporal features and apply a stepwise matching method. Performance was evaluated in terms of accuracy, extraction and matching time, and signature size, using original videos and modified versions with brightness changes, lossy compression, and text/logo overlays. We present empirical parameter selection and experimental results for a simple matching method using only the spatial feature, and compare the results with existing algorithms. According to the experiments, the proposed method performs well in accuracy, processing time, and signature size, and is therefore suitable for video detection on large databases.
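
A minimal sketch of the signature construction follows: a normalized gray-value histogram per frame as the spatial feature and the similarity between consecutive histograms as the temporal feature. The bin count and similarity measure are assumptions for illustration.

    import numpy as np

    # Sketch of a frame signature from the gray-value distribution plus a temporal
    # similarity map between consecutive frames (bin count and similarity measure
    # are assumptions for illustration).

    def spatial_feature(frame_gray, bins=16):
        hist, _ = np.histogram(frame_gray, bins=bins, range=(0, 256))
        return hist / hist.sum()                       # normalized gray distribution

    def temporal_map(features):
        """Similarity of each frame to the next: 1 - (L1 distance) / 2."""
        diffs = np.abs(np.diff(features, axis=0)).sum(axis=1)
        return 1.0 - diffs / 2.0

    def video_signature(frames):
        feats = np.array([spatial_feature(f) for f in frames])
        return {"spatial": feats, "temporal": temporal_map(feats)}

    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, (120, 160)) for _ in range(30)]
    sig = video_signature(frames)
    print(sig["spatial"].shape, sig["temporal"].shape)   # (30, 16) and (29,)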