• Title/Summary/Keyword: SQL DB


A Study on DB Security Problem Improvement of DB Masking by Security Grade (DB 보안의 문제점 개선을 위한 보안등급별 Masking 연구)

  • Baek, Jong-Il;Park, Dea-Woo
    • Journal of the Korea Society of Computer and Information, v.14 no.4, pp.101-109, 2009
  • Oracle DBMS has shipped a basic encryption module since version 8i, but the encryption causes performance degradation and restricts users. In this paper we analyze DB security problems by technology area: whether index searches remain possible, difficulties in object management, the serious DB performance degradation caused by encryption, whether real-time data encryption is available, and whether data access control such as IP-based restriction is available. We then present a comprehensive security framework that uses the DB Masking technique, an alternative to technical encryption, in order to improve the availability of a secured DB. As this alternative, we use virtual accounts and set up masking policies by security grade; through the virtual accounts we perform prior user authentication and SQL query approval, verify integrity after the fact, and collect audit logs so that an administrator can operate the DB safely.
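
As a rough illustration of the grade-based masking idea described above, the following Python sketch masks sensitive columns according to a security grade before a row is returned to a user. The grade names, columns, and masking rule are hypothetical, not the policy used in the paper.

```python
# Hypothetical sketch of grade-based DB masking; the grades, columns,
# and masking rule are illustrative, not the paper's actual policy.

# Columns that each security grade must see masked.
MASKING_POLICY = {
    "grade_1": set(),                                   # highest grade: nothing masked
    "grade_2": {"resident_id"},                         # mask the national ID only
    "grade_3": {"resident_id", "phone", "salary"},      # mask all sensitive columns
}

def mask_value(value: str) -> str:
    """Keep the first two characters and replace the rest with '*'."""
    return value[:2] + "*" * max(len(value) - 2, 0)

def mask_row(row: dict, grade: str) -> dict:
    """Return a copy of the row with columns masked according to the user's grade."""
    hidden = MASKING_POLICY.get(grade, set(row))        # unknown grade: mask everything
    return {col: mask_value(str(val)) if col in hidden else val
            for col, val in row.items()}

if __name__ == "__main__":
    row = {"name": "Hong Gil-dong", "resident_id": "900101-1234567",
           "phone": "010-1234-5678", "salary": 52000000}
    print(mask_row(row, "grade_3"))
```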

Modeling and Implementation of Public Open Data in NoSQL Database

  • Min, Meekyung
    • International Journal of Internet, Broadcasting and Communication, v.10 no.3, pp.51-58, 2018
  • In order to utilize the various data provided by the Korean public open data portal, the data should be systematically managed in a database. Since the range of open data is enormous and the amount of data continues to increase, it is preferable to use a database capable of processing big data in order to analyze and utilize the data. This paper proposes a data modeling and implementation method suitable for public data. The target data is subway-related data provided by the public open data portal. The schema of the public data related to Seoul metro stations is analyzed and its problems are presented. To solve these problems, this paper proposes a method to normalize and structure the subway data and to model it in a NoSQL database. In addition, the implementation result is shown using MongoDB, a document-based database capable of processing big data.
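
The following is a minimal sketch of the kind of document modeling the paper describes, assuming a local MongoDB instance and the pymongo driver; the collection name and field layout are illustrative, not the paper's actual schema.

```python
# Sketch of storing structured subway-station documents in MongoDB.
# Assumes a local mongod and the pymongo driver; field names are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
stations = client["open_data"]["subway_stations"]

# One station document with line and exit information embedded.
stations.insert_one({
    "station_code": "0150",
    "name_ko": "서울역",
    "name_en": "Seoul Station",
    "lines": [{"line": "1", "operator": "Seoul Metro"},
              {"line": "4", "operator": "Seoul Metro"}],
    "exits": [{"no": 1, "nearby": "Seoul Square"}],
})

# Index the fields that station lookups filter on.
stations.create_index([("name_ko", 1), ("lines.line", 1)])
print(stations.count_documents({"lines.line": "4"}))
```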

Design of GUI for Benchmarking Database Engines Using YCSB (YCSB 기반의 데이터베이스 엔진 벤치마킹 GUI 설계)

  • Choi, Jae-yong;Ham, Seon-jung;Choi, do-jin;Park, soo-bin;Park, song-hee;Baek, yeon-hee;Shin, bo-kyoung;Park, jae-yeol;Lim, jong-tae;Bok, kyoung-soo;Yoo, jae-soo
    • Proceedings of the Korea Contents Association Conference, 2019.05a, pp.459-460, 2019
  • As the size of the data handled by databases grows, the use of NoSQL DBs instead of SQL DBs is increasing. This shift calls for optimizing storage performance and improving performance evaluation methods through benchmarking and analysis of NoSQL systems and storage devices. In this paper, we design a simple benchmarking GUI that takes user convenience into account in order to remove the inconvenience of operating existing benchmarking tools. By employing a visualization tool, the system provides an environment in which benchmarking results can be analyzed easily.
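
As a rough sketch of how a benchmarking GUI might drive YCSB behind the scenes, the snippet below runs a YCSB transaction phase and extracts the overall throughput from its report. The binding, workload path, and working directory are assumptions, not details from the paper.

```python
# Sketch of driving YCSB from Python and extracting the overall throughput,
# as a GUI back end might do.  The binding and workload path are assumptions.
import subprocess

def run_ycsb(binding: str = "mongodb", workload: str = "workloads/workloada") -> float:
    """Run a YCSB transaction phase and return its Throughput(ops/sec)."""
    result = subprocess.run(
        ["bin/ycsb", "run", binding, "-s", "-P", workload],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.splitlines():
        # YCSB reports lines such as: [OVERALL], Throughput(ops/sec), 1234.5
        if line.startswith("[OVERALL], Throughput(ops/sec)"):
            return float(line.split(",")[-1])
    raise RuntimeError("no throughput line found in YCSB output")

if __name__ == "__main__":
    print(f"throughput: {run_ycsb():.1f} ops/sec")
```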


An Efficient Design and Implementation of an MdbULPS in a Cloud-Computing Environment

  • Kim, Myoungjin;Cui, Yun;Lee, Hanku
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.8, pp.3182-3202, 2015
  • Flexibly expanding the storage capacity required to process a large amount of rapidly increasing unstructured log data is difficult in a conventional computing environment. In addition, implementing a log processing system providing features that categorize and analyze unstructured log data is extremely difficult. To overcome such limitations, we propose and design a MongoDB-based unstructured log processing system (MdbULPS) for collecting, categorizing, and analyzing log data generated from banks. The proposed system includes a Hadoop-based analysis module for reliable parallel-distributed processing of massive log data. Furthermore, because the Hadoop distributed file system (HDFS) stores data by generating replicas of collected log data in block units, the proposed system offers automatic system recovery against system failures and data loss. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. To evaluate the proposed system, we conducted three different performance tests on a local test bed including twelve nodes: comparing our system with a MySQL-based approach, comparing it with an HBase-based approach, and changing the chunk size option. From the experiments, we found that our system showed better performance in processing unstructured log data.
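
A minimal sketch of the collection-and-categorization step described above, assuming a local MongoDB and the pymongo driver; the log format and category names are invented for illustration and are not the authors' implementation.

```python
# Sketch of collecting and categorizing unstructured log lines as MongoDB
# documents.  Assumes a local mongod and pymongo; the fields are illustrative.
from datetime import datetime, timezone
from pymongo import MongoClient

logs = MongoClient("mongodb://localhost:27017")["mdbulps_demo"]["logs"]

def collect(raw_line: str, category: str) -> None:
    """Store a raw log line together with its category and an ingest timestamp."""
    logs.insert_one({
        "category": category,                 # e.g. "transaction", "login", "error"
        "raw": raw_line,                      # unstructured payload kept as-is
        "collected_at": datetime.now(timezone.utc),
    })

collect("2015-08-01 10:32:11 WITHDRAW acct=1234 amount=50000", "transaction")

# Category-level analysis: count the stored documents per category.
for doc in logs.aggregate([{"$group": {"_id": "$category", "count": {"$sum": 1}}}]):
    print(doc)
```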

A visual query database system for the Sample Research DB of the National Health Insurance Service (국민건강보험공단의 표본연구DB를 위한 비주얼 쿼리 데이터베이스 시스템 개발 연구)

  • Cho, Sang-Hoon;Kim, HeeChan;Kang, Gunseog
    • The Korean Journal of Applied Statistics, v.30 no.1, pp.13-24, 2017
  • The Sample Cohort DB supplied by the National Health Insurance Service is a valuable resource for statistical studies as well as for health and medical studies. It takes significant time and effort to extract data from this large Cohort DB. We therefore introduce a database system, called the National Health Insurance Service Cohort DB Extract Tool (NICE Tool), which supports several useful operations for managing the Cohort DB effectively and efficiently. For example, researchers can extract the variables and cases related to their study simply by clicking a mouse, without any prior knowledge of the SAS DATA step or SQL. We expect that NICE Tool will facilitate faster extraction of data and eventually lead to active use of the Cohort DB for research purposes.
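
The snippet below sketches the general idea behind such a visual query tool: turning variables and filters selected with mouse clicks into the SQL that would otherwise have to be written by hand. The table and column names are hypothetical and unrelated to the actual Sample Cohort DB schema.

```python
# Sketch of the visual-query idea: turn clicked selections into SQL.
# Table and column names are hypothetical, not the Sample Cohort DB schema.
def build_query(table: str, variables: list[str], conditions: dict[str, str]) -> str:
    """Build a SELECT statement from GUI-selected variables and filters."""
    select_list = ", ".join(variables) if variables else "*"
    where = " AND ".join(f"{col} = '{val}'" for col, val in conditions.items())
    sql = f"SELECT {select_list} FROM {table}"
    return f"{sql} WHERE {where}" if where else sql   # real code should bind parameters

# A researcher clicks three variables and one filter in the GUI:
print(build_query("cohort_2013", ["person_id", "sex", "age_group"], {"sex": "2"}))
# SELECT person_id, sex, age_group FROM cohort_2013 WHERE sex = '2'
```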

An Analysis of the Overhead of Multiple Buffer Pool Scheme on InnoDB-based Database Management Systems (InnoDB 기반 DBMS에서 다중 버퍼 풀 오버헤드 분석)

  • Song, Yongju;Lee, Minho;Eom, Young Ik
    • Journal of KIISE, v.43 no.11, pp.1216-1222, 2016
  • The advent of large-scale web services has resulted in a gradual increase in the amount of data used by those services. Such big data are managed efficiently by DBMSs such as MySQL and MariaDB, which use InnoDB as their storage engine, since InnoDB guarantees ACID and is suitable for handling large-scale data. To improve I/O performance, InnoDB caches the data and indexes of its database in a buffer pool. It also supports multiple buffer pools to mitigate lock contention. However, the multiple buffer pool scheme introduces additional data-consistency overhead. In this paper, we analyze the overhead of the multiple buffer pool scheme. In our experimental results, although the multiple buffer pool scheme mitigates lock contention by up to 46.3%, the throughput of the DBMS is significantly degraded, by up to 50.6%, due to increased disk I/O and fsync calls.
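
For readers who want to see the settings involved, the sketch below queries the InnoDB buffer pool variables (including innodb_buffer_pool_instances, which controls the number of buffer pools) on a running MySQL or MariaDB server. It assumes the PyMySQL driver and placeholder credentials; it is not the paper's test harness.

```python
# Sketch of inspecting InnoDB buffer pool settings on a running MySQL/MariaDB.
# Assumes the PyMySQL driver and placeholder credentials.
import pymysql

conn = pymysql.connect(host="localhost", user="bench", password="bench", database="test")
with conn.cursor() as cur:
    cur.execute("SHOW VARIABLES LIKE 'innodb_buffer_pool%'")
    for name, value in cur.fetchall():
        # e.g. innodb_buffer_pool_size, innodb_buffer_pool_instances
        print(f"{name} = {value}")
conn.close()
```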

Experiments of Search Query Performance for SQL-Based Open Source Databases

  • Min, Meekyung
    • International Journal of Internet, Broadcasting and Communication, v.10 no.2, pp.31-38, 2018
  • As the use of open source databases grows, so does the need to evaluate the performance of search queries on these databases. This paper compares the search query performance of SQL-based open source databases with commercial databases through experiments. The targets are MySQL, MariaDB, and MS-SQL Server. In this study, the execution times of various types of search queries are measured. Search query performance is also measured while varying the indexes and the number of tuples. The experimental results show that SQL-based open source databases have the potential to replace commercial databases when indexes are used and the number of tuples is not very large.
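
A minimal sketch of the kind of timing harness such experiments rely on, assuming the PyMySQL driver, placeholder credentials, and a hypothetical table; it is not the paper's actual benchmark code.

```python
# Sketch of timing a search query repeatedly, as the experiments above do.
# Assumes PyMySQL, placeholder credentials, and a hypothetical table and column.
import time
import pymysql

def time_query(cur, sql: str, runs: int = 5) -> float:
    """Return the average elapsed seconds over several executions of the query."""
    elapsed = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        cur.execute(sql)
        cur.fetchall()                        # make sure the result set is fully read
        elapsed += time.perf_counter() - start
    return elapsed / runs

conn = pymysql.connect(host="localhost", user="bench", password="bench", database="test")
with conn.cursor() as cur:
    avg = time_query(cur, "SELECT * FROM items WHERE name LIKE 'abc%'")
    print(f"average: {avg * 1000:.2f} ms")
conn.close()
```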

Development of the Unified Database Design Methodology for Big Data Applications - based on MongoDB -

  • Lee, Junho;Joo, Kyungsoo
    • Journal of the Korea Society of Computer and Information, v.23 no.3, pp.41-48, 2018
  • The recent explosion of big data is characterized by continuous data generation, large volume, and unstructured formats. Existing relational database technologies are inadequate for handling such big data because of their limited processing speed and the significant cost of expanding storage. Currently implemented solutions are mainly based on relational databases, which are no longer suited to these data volumes. NoSQL solutions allow us to consider new approaches to data warehousing, especially from the multidimensional data management point of view. In this paper, we develop and propose an integrated design methodology based on MongoDB for big data applications. The proposed methodology is more scalable than existing methodologies, so it handles big data easily.
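
The sketch below illustrates the embedding-oriented design choice that document databases such as MongoDB make possible, using hypothetical order and customer fields; it shows the general idea, not the paper's methodology.

```python
# Sketch of a document-oriented design decision: embed data that is always
# read together instead of splitting it into joined relational tables.
# Assumes a local mongod and pymongo; the field names are illustrative.
from pymongo import MongoClient

orders = MongoClient("mongodb://localhost:27017")["design_demo"]["orders"]

orders.insert_one({
    "order_no": "2018-0001",
    "customer": {"id": "C100", "name": "Kim"},          # embedded, no join needed
    "items": [
        {"sku": "A-1", "qty": 2, "price": 12000},
        {"sku": "B-7", "qty": 1, "price": 45000},
    ],
})

# The whole aggregate comes back in a single read.
print(orders.find_one({"order_no": "2018-0001"}))
```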

A Study on WSDL Document Structure in Web Services (웹 서비스에서의 WSDL 문서 구문에 대한 연구)

  • Hwang Eui-Chul
    • Proceedings of the Korea Contents Association Conference, 2005.11a, pp.234-238, 2005
  • WSDL is the specification that defines how a web service must be described in XML. Using the WSDL, a client can discover a web service and call the public operations that the service exposes. In this paper, the application shows the WSDL definition of a simple service providing slip-data processing. The service supports three web methods, WriteSlipXMLFromSql, WriteSlipXMLFromSqlProc, and InsertSlipDataToDb, which are deployed using the SOAP protocol over HTTP. We expect the proposed web services to contribute to constructing useful World Wide Web services, which are essential in building an e-commerce society.
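
As an illustration of how a client can consume a WSDL-described service, the sketch below uses the zeep SOAP library; the WSDL URL is a placeholder and the call argument is assumed, with only the operation names taken from the paper.

```python
# Sketch of consuming a WSDL-described SOAP service from Python with zeep.
# The WSDL URL is a placeholder and the call argument is assumed; only the
# operation names come from the paper.
from zeep import Client

client = Client("http://example.com/SlipService?wsdl")   # placeholder URL

# zeep parses the WSDL and exposes the declared operations on client.service.
print(list(client.wsdl.services))                        # inspect what was published

# Illustrative call to one of the paper's operations (argument shape assumed).
result = client.service.WriteSlipXMLFromSql("SELECT * FROM slip")
print(result)
```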


SQL Learning Tool Using TPC-H model (TPC-H 데이터모델을 이용한 SQL 교육 도구)

  • Pack, Inhye;Kim, Jieun;Jeon, Minah;Shim, Jaehee;Kang, Hyunjeong;Park, Uchang
    • Proceedings of the Korea Information Processing Society Conference, 2011.11a, pp.1532-1533, 2011
  • In this study, we develop an educational tool that lets developers who want to learn SQL study SQL syntax. Developers can learn SQL syntax through examples and explanations, and can study easily by viewing an ER diagram and using the concepts of a logical DB. The examples are divided into beginner and intermediate levels, so users can choose learning suited to their level. The TPC-H data is a standard data model used in DSS environments and is generated with the database generator; in this study the tool is configured so that users can adjust the amount of data.
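
The snippet below sketches the kind of beginner-level exercise such a tool could present, using an in-memory SQLite database and a heavily reduced TPC-H-style lineitem table; real TPC-H data would be produced by the dbgen database generator at a user-chosen scale.

```python
# Sketch of a beginner-level exercise against a tiny TPC-H-style table.
# Uses an in-memory SQLite DB; real TPC-H data would come from dbgen.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE lineitem (
    l_orderkey INTEGER, l_quantity REAL, l_extendedprice REAL, l_returnflag TEXT)""")
conn.executemany(
    "INSERT INTO lineitem VALUES (?, ?, ?, ?)",
    [(1, 17, 21168.23, "N"), (1, 36, 45983.16, "N"), (2, 8, 13309.60, "R")],
)

# Exercise: total quantity and revenue per return flag (a simplified TPC-H Q1).
for row in conn.execute("""SELECT l_returnflag, SUM(l_quantity), SUM(l_extendedprice)
                           FROM lineitem GROUP BY l_returnflag"""):
    print(row)
```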