• Title/Summary/Keyword: column-oriented database


A Study on the Model Development for Production of Interior Drawings and Estimation of Quantities (실내도면 작성과 물량산출을 위한 모델 개발에 관한 연구)

  • 정례화;이승우;추승연
    • Korean Institute of Interior Design Journal
    • /
    • no.19
    • /
    • pp.30-37
    • /
    • 1999
  • This study presents a method for constructing an integrated system that automates design drawings, quantity take-off, and cost estimation by extracting building-material information produced in the course of three-dimensional computer modeling. An object-oriented methodology is introduced to compose design information in three dimensions, to unify building information in a single space, to express the properties of building elements and materials, and to construct a database through which the computer can recognize architectural information. An object denotes a conceptual or physical entity in the real world; the elements needed to compose a building, such as columns, walls, beams, slabs, doors, and windows, can be treated as objects, and these may be tangible or intangible. The system lets the user assign properties to these building objects, and through an API (Application Programming Interface) the selected information is automatically converted for each unit task, such as design drawings, structural drawings, and quantity calculation.

Processing of Sensor Network Data using Column-Oriented Database (세로-지향 데이터베이스를 이용한 센서 네트워크 데이터 처리)

  • Oh, Byung-Jung;Kim, Kyung-Ho;Kim, Jae-Kyung;Kim, Kyung-Chang
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2012.06c
    • /
    • pp.66-68
    • /
    • 2012
  • In sensor networks, replacing or recharging sensor batteries is difficult, so extending battery life as far as possible is important. This paper introduces a strategy for reducing communication cost by applying a column-oriented database system to a sensor network. Because a column-oriented database stores data by column, only the columns relevant to a query need to be retrieved; this shortens message length, reduces communication cost, and thereby extends sensor battery life. The paper also describes how this processing differs from that of a conventional row-oriented database system.
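
The communication saving this abstract describes can be illustrated with a minimal sketch (plain Python with hypothetical readings, not the paper's implementation): when data are stored by column, a query for one attribute serializes only that column's values rather than whole rows.

```python
import json

# Hypothetical sensor readings: each row carries node id, temperature,
# humidity, and battery level.
rows = [
    {"node": i, "temp": 20.0 + i, "humid": 40 + i, "batt": 3.0}
    for i in range(100)
]

# Row-oriented answer to "SELECT temp": whole rows must be shipped.
row_payload = json.dumps(rows)

# Column-oriented layout: each attribute is stored (and fetched) separately.
columns = {
    "node":  [r["node"] for r in rows],
    "temp":  [r["temp"] for r in rows],
    "humid": [r["humid"] for r in rows],
    "batt":  [r["batt"] for r in rows],
}
col_payload = json.dumps(columns["temp"])  # only the queried column travels

print(len(col_payload) < len(row_payload))  # the column message is far shorter
```

The shorter message is what translates into lower radio usage, and hence longer battery life, in the sensor-network setting.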

Comparison of Storage Structures for RDF Data in Semantic Web (시맨틱 웹에서 RDF 데이터 저장구조들의 성능비교)

  • Kim, KyungHo;Back, WooHyoun;Son, JiEun;Kim, KyungChang
    • Annual Conference of KIPS
    • /
    • 2013.05a
    • /
    • pp.881-884
    • /
    • 2013
RDF (Resource Description Framework), the foundation of the Semantic Web, is a standard that lets web users access information more accurately and efficiently. The need to store and access RDF data efficiently is growing daily. The basic storage structure for storing and retrieving RDF data is a relational database. Recently, as the volume of RDF data has grown enormously, the column-oriented database, which is optimized for queries (simple lookups) over very large databases, has been proposed as an alternative. This paper compares and analyzes relational databases and column-oriented databases as storage structures for RDF data. Performance analysis using the Berlin SPARQL Benchmark demonstrates the efficiency of the column-oriented database as a storage structure for RDF data.
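
One common column-oriented approach to RDF storage (not necessarily the exact one benchmarked in this paper) is vertical partitioning: the single triple table is split into one (subject, object) mapping per predicate, so a query reads only the predicates it mentions. A minimal sketch with hypothetical triples:

```python
# Hypothetical RDF triples (subject, predicate, object).
triples = [
    ("product1", "label", "Widget"),
    ("product1", "price", 9.9),
    ("product2", "label", "Gadget"),
    ("product2", "price", 19.9),
]

# Triple-table (relational) style: every triple is scanned for one query.
def price_of_scan(subject):
    return [o for s, p, o in triples if s == subject and p == "price"]

# Vertically partitioned (column-oriented) layout: one table per predicate.
partitions = {}
for s, p, o in triples:
    partitions.setdefault(p, {})[s] = o

def price_of_partitioned(subject):
    # Only the 'price' partition is read; 'label' data is never touched.
    return partitions["price"].get(subject)

print(price_of_scan("product2"))         # [19.9]
print(price_of_partitioned("product2"))  # 19.9
```

Skipping unrelated predicates entirely is the same effect a column store achieves by reading only the queried columns.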

System Development for Analysis and Compensation of Column Shortening of Reinforced Concrete Tall Buildings (철근콘크리트 고층건물 기둥의 부등축소량 해석 및 보정을 위한 시스템 개발)

  • 김선영;김진근;김원중
    • Journal of the Korea Concrete Institute
    • /
    • v.14 no.3
    • /
    • pp.291-298
    • /
    • 2002
  • Recently, construction of reinforced concrete tall buildings has increased widely with improvements in material quality and design technology. Differential shortening of columns due to elastic, creep, and shrinkage deformations has therefore become an important issue. Prediction of the inelastic behavior of RC structures has nevertheless been neglected, even though these deformations cause serious problems in partition walls, external cladding, ducts, etc. In this paper, an analysis system for predicting and compensating the differential column shortenings, considering time-dependent deformations and the construction sequence, is developed using object-oriented techniques. The developed system accounts for the construction sequence, especially time-dependent deformation at early ages, and is composed of an input module, a database module, a database store module, an analysis module, and an analysis result generation module. A graphical user interface (GUI) is provided for the user's convenience. After the analysis, output results such as deflections and member forces over time can be examined in the generation module through the graphic diagrams, tables, and charts supported by the integrated environment.

Cloud-based Intelligent Management System for Photovoltaic Power Plants (클라우드 기반 태양광 발전단지 통합 관리 시스템)

  • Park, Kyoung-Wook;Ban, Kyeong-Jin;Song, Seung-Heon;Kim, Eung-Kon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.7 no.3
    • /
    • pp.591-596
    • /
    • 2012
  • Recently, an efficient management system for photovoltaic power plants has become necessary due to their continuously increasing construction. In this paper, we propose a cloud-based intelligent management system for many photovoltaic power plants. The proposed system stores the measured data of the power plants in Hadoop HBase, a column-oriented database, and computes performance, efficiency, and predictions of the amount of power generation in parallel based on the Map-Reduce model. A web-based data visualization module presents this information to the administrator in various forms.
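
The Map-Reduce style of computation this abstract mentions can be sketched as follows (plain Python in a single process; plant names, readings, and the efficiency formula are hypothetical stand-ins, and a real deployment would run on Hadoop):

```python
from functools import reduce

# Hypothetical measurements: (plant_id, generated_kWh, potential_kWh).
readings = [
    ("plantA", 80.0, 100.0),
    ("plantA", 90.0, 100.0),
    ("plantB", 50.0, 100.0),
]

# Map: emit (plant, (generated, potential)) pairs.
mapped = [(plant, (gen, pot)) for plant, gen, pot in readings]

# Shuffle: group values by key.
groups = {}
for key, value in mapped:
    groups.setdefault(key, []).append(value)

# Reduce: sum per plant and derive an efficiency ratio.
def reducer(acc, pair):
    return (acc[0] + pair[0], acc[1] + pair[1])

efficiency = {}
for plant, vals in groups.items():
    total_gen, total_pot = reduce(reducer, vals, (0.0, 0.0))
    efficiency[plant] = total_gen / total_pot

print(efficiency)  # {'plantA': 0.85, 'plantB': 0.5}
```

Because each key's reduction is independent, the per-plant work parallelizes naturally across a cluster, which is the point of using the Map-Reduce model here.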

Design and Implementation of Cloud-based Sensor Data Management System (클라우드 기반 센서 데이터 관리 시스템 설계 및 구현)

  • Park, Kyoung-Wook;Kim, Kyong-Og;Ban, Kyeong-Jin;Kim, Eung-Kon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.5 no.6
    • /
    • pp.672-677
    • /
    • 2010
  • Recently, an efficient management system for large-scale sensor data has become necessary due to the increasing deployment of large-scale sensor networks. In this paper, we propose a cloud-based sensor data management system with low cost, high scalability, and efficiency. Sensor data from the sensor networks are transmitted to the cloud through a cloud-gateway, where outlier detection and event processing are performed. The transmitted sensor data are stored in Hadoop HBase, a distributed column-oriented database, and processed in parallel by a query processing module designed on the MapReduce model. Because processed results are provided through a REST-based web service, the proposed system can work with applications on a variety of platforms.
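
The outlier-detection step at the cloud-gateway could be as simple as a z-score filter; the paper does not specify its method, so the following is a minimal sketch with hypothetical readings and threshold:

```python
import statistics

def filter_outliers(values, z_threshold=3.0):
    """Drop readings more than z_threshold standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return list(values)
    return [v for v in values if abs(v - mean) / stdev <= z_threshold]

readings = [20.1, 20.3, 19.9, 20.2, 95.0]  # 95.0 is a faulty-sensor spike
clean = filter_outliers(readings, z_threshold=1.5)
print(clean)  # [20.1, 20.3, 19.9, 20.2]
```

Filtering at the gateway keeps obviously bad readings out of HBase, so later MapReduce queries never have to re-scrub them.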

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data, and their strict schemas make it difficult to distribute stored data across additional nodes when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data grows rapidly; it is a non-relational database model with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data is rapidly increasing, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates log-analysis results from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions, while the aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log data insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insertion performance evaluation of MongoDB over various chunk sizes.
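
The log collector's routing step described in this abstract, classifying each record by type and sending real-time logs to the MySQL module and bulk logs to the MongoDB module, can be sketched as follows (plain Python with in-memory stand-ins for the two stores; the log types and classification rule are hypothetical, as the paper does not list them):

```python
# In-memory stand-ins for the MySQL module (real-time, rigid schema)
# and the MongoDB module (aggregated, free schema).
mysql_store = []
mongo_store = []

REALTIME_TYPES = {"transaction", "error"}  # hypothetical classification rule

def collect(log):
    """Classify one log record by type and route it to the proper store."""
    if log.get("type") in REALTIME_TYPES:
        mysql_store.append(log)
    else:
        # Document model: records with different fields coexist in one store.
        mongo_store.append(log)

collect({"type": "transaction", "account": "123-456", "amount": 500})
collect({"type": "access", "url": "/login", "agent": "Mozilla/5.0"})
collect({"type": "batch", "job": "settlement", "rows": 10432})

print(len(mysql_store), len(mongo_store))  # 1 2
```

Note how the three records carry different field sets; this is the schema flexibility that motivates the document-oriented store for the bulk path, while the real-time path keeps a fixed shape suited to a relational table.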