• Title/Summary/Keyword: SQL analysis

Development of an OODBMS Functionality Testing Tool Prototype (객체지향 DBMS 기능 시험 도구의 프로토타입 개발)

  • 김은영;이상호;전성택
    • The Journal of Information Technology and Database, v.2 no.2, pp.25-34, 1995
  • In this paper, we present the design philosophy and implementation issues of a functionality testing tool for object-oriented database systems. The tool was developed to validate UniSQL/X functionality through its C++ interface, and was designed with scalability, simplicity, and extensibility in mind. The schema is deliberately constructed to verify object-oriented features such as abstraction, inheritance, and aggregation. Each test item was derived using black-box techniques such as equivalence partitioning and boundary-value analysis. The tool operates in six phases: database creation, database population, construction of the test index, compilation and linking, execution and result reporting, and final cleanup. The prototype provides more than 140 test items across 90 programs.
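
The black-box techniques named in the abstract are easy to mechanize. As a rough illustration (not the paper's actual tool, which targeted UniSQL/X through C++), the sketch below derives test values for a numeric column from its declared range; the `person`/`age` column and its range are hypothetical.

```python
# Sketch: deriving test items for a numeric attribute via
# equivalence partitioning and boundary-value analysis.
# The attribute name and valid range are hypothetical examples.

def boundary_value_cases(lo, hi):
    """Values at and just around the boundaries of the valid range."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_partitions(lo, hi):
    """One representative from each partition: below, inside, above."""
    return [lo - 10, (lo + hi) // 2, hi + 10]

if __name__ == "__main__":
    # e.g. an 'age' column declared as CHECK (age BETWEEN 0 AND 120)
    for v in equivalence_partitions(0, 120) + boundary_value_cases(0, 120):
        print(f"INSERT INTO person(age) VALUES ({v});")
```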

A Database Security System for Detailed Access Control and Safe Data Management (상세 접근 통제와 안전한 데이터 관리를 위한 데이터베이스 보안 시스템)

  • Cho, Eun-Ae;Moon, Chang-Joo;Park, Dae-Ha;Hong, Sung-Jin;Baik, Doo-Kwon
    • Journal of KIISE:Databases, v.36 no.5, pp.352-365, 2009
  • Recently, data access control policies have not been properly applied to authorized and unauthorized users, and information leakage incidents have occurred due to database security vulnerabilities. In traditional database access control methods, administrators grant users permissions on database objects. However, these methods cannot express diverse access control policies for the database. In addition, database security methods based on data encryption make it difficult to use data indexing. This paper therefore proposes an enhanced database access control system that applies diverse security policies by analyzing the packets exchanged between the client and the database server on the network. The proposed system supports access control policies bound to specific factors such as the date, the time, the SQL string, and the number of result rows. It also assures integrity via a public key certificate and a MAC (Message Authentication Code) to prevent modification of user information and query sentences.
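
The abstract does not give the system's concrete interfaces, but the combination of factor-based policy checks and a MAC over the query string is straightforward to sketch. The following is a minimal illustration with hypothetical policy fields; a fixed shared secret stands in for the paper's certificate-based key material.

```python
# Sketch: checking a factor-based access policy (time of day, SQL string)
# and verifying a MAC over the query. Policy fields and the shared key
# are hypothetical; the paper derives keys via a public key certificate.
import hmac, hashlib
from datetime import datetime, time

POLICY = {"allowed_hours": (time(9, 0), time(18, 0)),
          "forbidden_keywords": ["DROP", "TRUNCATE"]}

def policy_allows(sql: str, now: datetime) -> bool:
    start, end = POLICY["allowed_hours"]
    if not (start <= now.time() <= end):
        return False
    return not any(kw in sql.upper() for kw in POLICY["forbidden_keywords"])

def mac_of(sql: str, key: bytes) -> str:
    return hmac.new(key, sql.encode(), hashlib.sha256).hexdigest()

def verify(sql: str, received_mac: str, key: bytes) -> bool:
    # Constant-time comparison avoids timing attacks on the tag.
    return hmac.compare_digest(mac_of(sql, key), received_mac)

key = b"session-key-from-certificate-exchange"   # hypothetical
sql = "SELECT name FROM patient WHERE id = 7"
tag = mac_of(sql, key)
print(policy_allows(sql, datetime(2009, 5, 1, 10, 30)) and verify(sql, tag, key))
```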

Graph Database Benchmarking Systems Supporting Diversity (다양성을 지원하는 그래프 데이터베이스 벤치마킹 시스템)

  • Choi, Do-Jin;Baek, Yeon-Hee;Lee, So-Min;Kim, Yun-A;Kim, Nam-Young;Choi, Jae-Young;Lee, Hyeon-Byeong;Lim, Jong-Tae;Bok, Kyoung-Soo;Song, Seok-Il;Yoo, Jae-Soo
    • The Journal of the Korea Contents Association, v.21 no.12, pp.84-94, 2021
  • Graph databases have been developed to efficiently store and query graph data composed of vertices and edges that express relationships between objects. Since the query types of graph databases differ markedly from those of traditional NoSQL databases, benchmarking tools suited to graph databases are needed to verify their performance. In this paper, we propose an efficient graph database benchmarking system that supports diversity in both input graphs and queries. The proposed system uses OrientDB to conduct the benchmarking and LDBC, an existing graph data generation tool, to provide diverse input and query graphs. We demonstrate the feasibility and effectiveness of the proposed scheme through an analysis of the benchmarking results. The performance evaluation shows that the proposed system can generate customizable synthetic graph data and that benchmarking can be performed on the generated data.
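
The abstract does not detail the driver's interfaces, so the sketch below only shows the general shape of such a benchmark harness: timing a set of query workloads and reporting a robust statistic. The `run_query` stub and the workload names are placeholders for real OrientDB calls over LDBC-generated data.

```python
# Sketch: a generic benchmark driver timing a set of query workloads.
# 'run_query' is a placeholder for a real graph database client call
# (the paper's system uses OrientDB with LDBC-generated graphs).
import time
import statistics

def run_query(query: str) -> None:
    time.sleep(0.001)   # stand-in for executing the query on the server

def benchmark(queries: list[str], repetitions: int = 5) -> dict[str, float]:
    results = {}
    for q in queries:
        timings = []
        for _ in range(repetitions):
            start = time.perf_counter()
            run_query(q)
            timings.append(time.perf_counter() - start)
        results[q] = statistics.median(timings)   # robust to outliers
    return results

workload = ["1-hop neighbours", "2-hop paths", "shortest path"]
for query, seconds in benchmark(workload).items():
    print(f"{query}: {seconds * 1000:.2f} ms (median)")
```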

ERD Representation using Auto-Generated Form and SQL (자동 생성 폼과 SQL을 이용한 ERD 표현)

  • Ra, Young-Gook
    • The Journal of the Korea Contents Association, v.9 no.5, pp.61-75, 2009
  • Generally, the development of a database application includes a requirements analysis phase that produces an ERD (Entity Relationship Diagram) and process models, followed by coding and testing. Of these phases, analysis is the least formalized. It is usually hard because (1) customers do not know the details of the desired system; (2) developers cannot easily understand the customers' business logic; and (3) the outcomes of the analysis, the ERD and the process models, are hard for customers to understand. This paper proposes presenting customers with executable forms, which are easier to understand, instead of the ERD. The forms accept data input so that customers can review various aspects of the resulting models. Developers can implement the business logic instantly and demonstrate it visually in order to elicit its details; to this end, the customer-supplied business logic is implemented through references between forms, actions, and constraints, viewed from the perspective of data flow. As customers execute the forms, they review the logic they supplied and discover further business logic of their own. Iterating this process yields a requirements analysis that is sufficiently detailed and free of conflicts.
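
The core idea, showing customers executable forms derived from the schema rather than the ERD itself, can be sketched mechanically: each entity becomes a form whose fields are its columns. The sketch below generates a textual form description from a SQLite schema; the `employee` table is a made-up example, and the paper's system is far richer (actions, constraints, and references between forms).

```python
# Sketch: auto-generating a data-entry "form" from a relational schema,
# in the spirit of showing customers executable forms instead of an ERD.
# The schema is a made-up example.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE employee (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    dept TEXT,
    salary REAL)""")

def form_for(table: str) -> str:
    cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
    lines = [f"Form: {table}"]
    for _cid, name, ctype, notnull, _default, pk in cols:
        flags = " (required)" if notnull or pk else ""
        lines.append(f"  [{ctype:>7}] {name}{flags}: ______")
    return "\n".join(lines)

print(form_for("employee"))
```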

ILVA: Integrated audit-log analysis tool and its application (시스템 보안 강화를 위한 로그 분석 도구 ILVA와 실제 적용 사례)

  • 차성덕
    • Journal of the Korea Institute of Information Security & Cryptology, v.9 no.3, pp.13-26, 1999
  • Despite its numerous positive aspects, the widespread use of the Internet has led to an increased number of system intrusions, and the need for enhanced security mechanisms is urgent. Systematic collection and analysis of log data are essential to intrusion investigation. Unfortunately, existing logs are stored in diverse and incompatible formats, making automated intrusion investigation practically impossible. We examined the types of log data essential to intrusion investigation and implemented a tool that enables systematic collection and efficient analysis of voluminous log data. Our tool, based on an RDBMS and SQL, provides a graphical, user-friendly interface. We describe our experience using the tool in an actual intrusion investigation and explain how it can be further enhanced.
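
As the abstract notes, once heterogeneous logs are normalized into an RDBMS, investigation largely reduces to SQL. A minimal sketch of that idea, with a made-up log schema and failure threshold (not ILVA's own):

```python
# Sketch: the RDBMS+SQL approach to log analysis described in the
# abstract. The log schema and threshold are made-up illustrations.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auth_log (ts TEXT, host TEXT, user TEXT, ok INTEGER)")
conn.executemany("INSERT INTO auth_log VALUES (?, ?, ?, ?)", [
    ("1999-03-01 10:00:01", "10.0.0.5", "root", 0),
    ("1999-03-01 10:00:03", "10.0.0.5", "root", 0),
    ("1999-03-01 10:00:05", "10.0.0.5", "root", 0),
    ("1999-03-01 10:02:00", "10.0.0.9", "kim", 1),
])

# Hosts with 3 or more failed logins: a typical investigative query.
rows = conn.execute("""
    SELECT host, COUNT(*) AS failures
    FROM auth_log
    WHERE ok = 0
    GROUP BY host
    HAVING COUNT(*) >= 3
""").fetchall()
print(rows)   # [('10.0.0.5', 3)]
```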

The Model of Network Packet Analysis based on Big Data (빅 데이터 기반의 네트워크 패킷 분석 모델)

  • Choi, Bomin;Kong, Jong-Hwan;Han, Myung-Mook
    • Journal of the Korean Institute of Intelligent Systems, v.23 no.5, pp.392-399, 2013
  • With the development of IT and the arrival of the information age, most aspects of our lives have come to depend heavily on networks. Although networks provide much useful information and many services, they also give intruders points of attack. In other words, we urgently need to cope with serious security problems, such as service outages and compromised systems, by exploiting the information carried in packets. Many security experts are developing solutions to respond to these threats, but existing solutions suffer from problems such as insufficient storage capacity and performance degradation as packet volumes grow massively. We therefore propose a packet analysis model that applies emerging Big Data technology to the security field. We use NoSQL, a massive data storage technology, to collect the rapidly growing packet data, implement a packet analysis model based on K-means clustering using MapReduce, a distributed programming framework, and demonstrate its high performance experimentally.
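
One K-means iteration maps naturally onto MapReduce: the map step assigns each feature vector to its nearest centroid, and the reduce step averages the vectors per centroid. A minimal in-memory sketch of that decomposition (the paper runs it on an actual distributed framework over NoSQL-stored packets; the 2-D points are made-up stand-ins for packet features):

```python
# Sketch: one K-means iteration expressed as map and reduce steps,
# the decomposition the paper runs on a distributed framework.
from collections import defaultdict
import math

def nearest(point, centroids):
    return min(range(len(centroids)),
               key=lambda i: math.dist(point, centroids[i]))

def map_step(points, centroids):
    # Map: emit (nearest centroid index, point) pairs.
    return [(nearest(p, centroids), p) for p in points]

def reduce_step(pairs):
    # Reduce: new centroid = mean of the points assigned to it.
    groups = defaultdict(list)
    for idx, p in pairs:
        groups[idx].append(p)
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for _, pts in sorted(groups.items())]

points = [(1, 1), (1, 2), (8, 8), (9, 8)]
centroids = [(0.0, 0.0), (10.0, 10.0)]
for _ in range(5):
    centroids = reduce_step(map_step(points, centroids))
print(centroids)   # converges to roughly [(1.0, 1.5), (8.5, 8.0)]
```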

Study of MongoDB Architecture by Data Complexity for Big Data Analysis System (빅데이터 분석 시스템 구현을 위한 데이터 구조의 복잡성에 따른 MongoDB 환경 구성 연구)

  • Hyeopgeon Lee;Young-Woon Kim;Jin-Woo Lee;Seong Hyun Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.16 no.5, pp.354-361, 2023
  • Big data analysis systems use NoSQL databases such as MongoDB to store, process, and analyze diverse forms of large-scale data. Depending on its configuration, MongoDB offers scalability and fast data processing through distributed processing and data replication. This paper investigates suitable MongoDB environment configurations for implementing big data analysis systems. For the performance evaluation, we configured both single-node and multi-node environments, expanding the multi-node setup from two to three data nodes and measuring performance in each environment. According to the analysis, for complex data structures with three or more dimensions, processing is approximately 5.75% faster in the single-node environment than with two data nodes, whereas a setting with three data nodes processes data about 25.15% faster than the single node. On the other hand, for simple one-dimensional data structures, the multi-node environment processes data approximately 28.63% faster than the single-node environment. Further research is needed to validate these findings in practice with diverse data structures and large volumes of data.
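
The single-node versus multi-node comparison amounts to pointing the same workload at differently configured deployments. A rough sketch with pymongo, assuming a standalone server and a three-member replica set are already running at hypothetical addresses (the nested document echoes the paper's "complex data structure" case):

```python
# Sketch: timing one workload against a standalone MongoDB server and
# a three-member replica set. Hosts and the replica set name are
# hypothetical; both deployments must already be running.
import time
from pymongo import MongoClient

def run_workload(uri: str, n: int = 10_000) -> float:
    client = MongoClient(uri)
    coll = client.bench.docs
    coll.drop()
    docs = [{"i": i, "payload": {"level1": {"level2": i % 7}}}
            for i in range(n)]
    start = time.perf_counter()
    coll.insert_many(docs)
    list(coll.find({"payload.level1.level2": 3}))
    return time.perf_counter() - start

single = run_workload("mongodb://localhost:27017")
multi = run_workload("mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0")
print(f"single: {single:.2f}s  replica set: {multi:.2f}s")
```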

A Study on the Implementation of SQL Primitives for Decision Tree Classification (판단 트리 분류를 위한 SQL 기초 기능의 구현에 관한 연구)

  • An, Hyoung Geun;Koh, Jae Jin
    • KIPS Transactions on Software and Data Engineering, v.2 no.12, pp.855-864, 2013
  • Decision tree classification is one of the important problems in data mining, and data mining has become an important task in the field of large database technologies. Efforts to couple data mining systems with database systems have therefore led to the development of database primitives supporting data mining functions such as decision tree classification. These primitives are special database operations that support the SQL implementation of decision tree classification algorithms, and they become constituent modules of the database system when specific algorithms are implemented. Developing such primitives has two aspects. The first is identifying, by analysis, common database primitives that support data mining functions. The other is providing an extended mechanism for implementing these primitives as a database system interface. How such primitives should be stored in the DBMS is one of the difficult problems in data mining. To address it, this paper describes database primitives that construct and apply optimized decision tree classifiers. We identify the operations useful for various classification algorithms, discuss the implementation of these primitives on a commercial DBMS, and present experimental results comparing their performance.
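
The best-known primitive of this kind is the class-count aggregate: for each candidate split attribute, a single GROUP BY produces the class histogram a decision tree algorithm needs to score the split. A small sketch on SQLite with a made-up training table (the paper itself targets a commercial DBMS, and this is not necessarily its exact primitive set):

```python
# Sketch: the classic class-count primitive behind SQL-based decision
# tree construction: one GROUP BY yields the per-value class histogram
# used to score a candidate split. Training data is made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE train (outlook TEXT, windy TEXT, play TEXT)")
conn.executemany("INSERT INTO train VALUES (?, ?, ?)", [
    ("sunny", "no", "no"), ("sunny", "yes", "no"),
    ("rain",  "no", "yes"), ("rain",  "yes", "no"),
    ("overcast", "no", "yes"), ("overcast", "yes", "yes"),
])

# Class histogram per value of the candidate split attribute 'outlook'.
for value, cls, cnt in conn.execute("""
        SELECT outlook, play, COUNT(*)
        FROM train
        GROUP BY outlook, play
        ORDER BY outlook, play"""):
    print(f"outlook={value:>8}  class={cls:>3}  count={cnt}")
```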

Development of educational programs for managing medical information utilizing medical data generation and analysis techniques (의료 데이터 발생과 분석기술을 활용한 의료정보관리 교육용 프로그램 개발)

  • Choi, Joonyoung
    • Journal of Digital Convergence, v.15 no.10, pp.377-386, 2017
  • This study developed an educational medical information management program that can improve learners' ability to manage medical information. The program was developed over eight months using VB, with an ACCESS database whose structure learners can easily understand. After analyzing the medical records, learners enter data into the discharge analysis, cancer registration, and incomplete-record programs. Once the data are entered and saved, the program can be used to understand and analyze the structure of the database and to generate medical information. The program improves learners' ability to manage medical information by having them extract the necessary data directly from the database through SQL and produce various kinds of medical information. However, although it is an educational program, it has no system for evaluating learners' use of it. Future studies should therefore develop an assessment system for evaluating learners.
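
The kind of exercise the program supports, turning raw records into a medical statistic with one query, looks roughly like the following; the discharge table and its columns are invented for illustration, not taken from the program itself.

```python
# Sketch: a SQL exercise of the kind the program supports: deriving a
# statistic (average length of stay per department) from raw records.
# The table and columns are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE discharge (dept TEXT, stay_days INTEGER)")
conn.executemany("INSERT INTO discharge VALUES (?, ?)",
                 [("IM", 5), ("IM", 9), ("GS", 3), ("GS", 4)])

for dept, avg_stay in conn.execute(
        "SELECT dept, AVG(stay_days) FROM discharge GROUP BY dept"):
    print(f"{dept}: average stay {avg_stay:.1f} days")
```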

OLAP System and Performance Evaluation for Analyzing Web Log Data (웹 로그 분석을 위한 OLAP 시스템 및 성능 평가)

  • 김지현;용환승
    • Journal of Korea Multimedia Society, v.6 no.5, pp.909-920, 2003
  • IT for CRM has been growing and developing rapidly. Typical techniques are statistical analysis tools, on-line analytical processing (OLAP) tools, and data mining algorithms (such as neural networks, decision trees, and association rules). Among customer data, web log data is particularly important, and using it efficiently calls for OLAP technology for multidimensional analysis. To build an OLAP cube, multidimensional summary results must be precalculated in order to obtain fast response times. But as the number of dimensions and sparse cells increases, serious data explosion occurs and OLAP performance degrades. In this paper, we explain why web log data sparsity occurs and which sparsity patterns arise in two and three dimensions for OLAP. Based on this analysis, we set up multidimensional data models and query models to benchmark each sparsity pattern. Finally, we evaluate the performance of three OLAP systems: MS SQL 2000 Analysis Services, Oracle Express, and C-MOLAP.
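
The data explosion the paper studies is easy to see: precomputing a cube means materializing a GROUP BY for every subset of dimensions, and with sparse data most cells of the full cross product are empty. A small sketch that enumerates the group-bys and measures cell sparsity on made-up web log tuples (not the paper's benchmark data):

```python
# Sketch: cube precomputation and sparsity, the effect the paper
# benchmarks. Every subset of dimensions gets its own GROUP BY, and
# sparsity is the fraction of cells in the full cross product that
# hold no data. The web log tuples are made up.
from collections import Counter
from itertools import combinations

logs = [("p1", "Mon", "KR"), ("p1", "Mon", "US"), ("p2", "Tue", "KR")]
dims = ["page", "day", "country"]

# One aggregate table per subset of dimensions: 2^3 = 8 group-bys.
for r in range(len(dims) + 1):
    for subset in combinations(range(len(dims)), r):
        counts = Counter(tuple(row[i] for i in subset) for row in logs)
        print([dims[i] for i in subset], dict(counts))

# Sparsity of the full 3-D cube: occupied cells vs. possible cells.
domains = [set(row[i] for row in logs) for i in range(len(dims))]
possible = 1
for d in domains:
    possible *= len(d)
occupied = len(set(logs))
print(f"occupied {occupied} of {possible} cells "
      f"({100 * (1 - occupied / possible):.0f}% sparse)")
```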
