• Title/Summary/Keyword: SQL Analysis (SQL 분석)

Design and Implementation of a Code Conversion Application to Prevent SQL Injection Attacks (SQL Injection 공격 방지를 위한 코드 변환 애플리케이션 설계 및 구현)

  • Ha, Man-Seok;Park, Soo-Hyun
    • Proceedings of the Korea Information Processing Society Conference / 2014.04a / pp.441-444 / 2014
  • Despite the fast, accurate, and convenient information processing brought by the spread of the Internet, the recent surge in security incidents has made countermeasures for managing personal and corporate information urgent. In particular, SQL injection attacks cause extensive damage through malicious acquisition of administrator privileges and abnormal logins. Most current research on SQL injection focuses on methods of detecting attacks. This paper instead seeks a fundamental solution by analyzing program code, locating vulnerable inline SQL query statements that contain quotation marks, and converting them into parameterized queries; the tool supports multiple languages, including Java and C#.NET, to increase its usefulness in development work.
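
A hedged sketch of the conversion the paper automates, written in TypeScript with the mysql2 driver (the paper's tool targets Java and C#.NET source instead); the table, columns, and connection settings are invented:

```typescript
// Sketch only: illustrates inline-to-parameterized query conversion.
import mysql from "mysql2/promise";

async function findUser(login: string, password: string) {
  const conn = await mysql.createConnection({ host: "localhost", database: "app" });

  // Vulnerable inline form: quoted user input concatenated into the SQL text,
  // so input such as  ' OR '1'='1  rewrites the query's logic.
  //   "SELECT * FROM users WHERE login = '" + login + "' AND pw = '" + password + "'"

  // Parameterized form a tool like the paper's would emit: placeholders keep
  // user input out of the SQL text entirely.
  const [rows] = await conn.execute(
    "SELECT * FROM users WHERE login = ? AND pw = ?",
    [login, password]
  );
  await conn.end();
  return rows;
}
```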

A Study on a Ubiquitous Fire Prevention Monitoring System Using Web SQL Database Technology (Web SQL Database 기술을 이용한 유비쿼터스 화재 방재 모니터링 시스템 연구)

  • Kang, Seung-Gu;Kim, Young-Hyuk;Lim, Il-Kwon;Gui, Li Qi;Lee, Jun-Woo;Kim, Myung-Jin;Lee, Jae-Kwang
    • Proceedings of the Korea Information Processing Society Conference / 2011.04a / pp.1268-1271 / 2011
  • A ubiquitous fire prevention system is an intelligent fire detection system that identifies fires from the values of various sensors such as temperature, humidity, CO, and CO2, reports them to the administrator, and activates extinguishing equipment according to configured system settings. Existing systems store the sensor values in a server-side database, and their monitoring systems depend on server-side databases such as MS-SQL, MySQL, and Oracle for all information. After analyzing these earlier approaches, we propose a method that uses the Web SQL Database supported since HTML5 to reduce the load on the server.
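
A minimal sketch of the client-side store this implies, using the (now-deprecated) Web SQL Database API; the database name, schema, and values are assumptions, and openDatabase is absent from standard DOM typings, hence the cast:

```typescript
// Sketch only: buffering sensor readings in the browser's Web SQL store.
const db = (window as any).openDatabase("firedb", "1.0", "sensor buffer", 2 * 1024 * 1024);

db.transaction((tx: any) => {
  tx.executeSql(
    "CREATE TABLE IF NOT EXISTS readings (sensor TEXT, value REAL, ts INTEGER)"
  );
  // Keeping raw readings in the browser and sending only summaries upstream
  // is what relieves the server-side database in the proposed design.
  tx.executeSql("INSERT INTO readings VALUES (?, ?, ?)", ["CO2", 412.5, Date.now()]);
});
```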

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are used in many processes, from system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data generated by banks. Most of the log data generated during banking operations come from handling clients' business, so a separate system is needed to gather, store, categorize, and analyze that log data. However, existing computing environments make it difficult to realize flexible storage expansion for massive amounts of unstructured log data and to execute the many functions needed to categorize and analyze them. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of massive amounts of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating blocks of the aggregated log data, the proposed system can automatically restore itself and continue operating after a malfunction. Finally, by establishing a distributed database with the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for unstructured log data, and their strict schemas prevent easy node expansion when rapidly growing data must be distributed across nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when data grow rapidly; it is a non-relational database whose structure suits unstructured data. NoSQL data models are usually classified as Key-Value, column-oriented, or document-oriented. Of these, the proposed system uses MongoDB, a representative document-oriented store with a free schema structure. MongoDB is adopted because its flexible schema makes unstructured log data easy to process, it facilitates node expansion when data grow rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data by log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted as graphs according to the user's analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation of log insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through MongoDB log insert performance evaluations over various chunk sizes.
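
As a rough illustration (not the authors' code) of the flexible-schema log store described above, an insert with the official MongoDB Node.js driver might look like this; the URI, database, collection, and field names are all assumptions:

```typescript
// Sketch only: a document store accepting differently shaped log documents.
import { MongoClient } from "mongodb";

async function storeLogs(): Promise<void> {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  const logs = client.db("bank").collection("logs");

  // Documents may carry different fields per log type, the property that
  // makes a document-oriented store suit unstructured log data.
  await logs.insertOne({ type: "transfer", branch: "A01", ts: new Date() });
  await logs.insertOne({ type: "login", user: "u123", ip: "10.0.0.7", ts: new Date() });

  await client.close();
}
```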

NoSQL-based Sensor Web System for Fine Particles Analysis Services (미세먼지 분석 서비스를 위한 NoSQL 기반 센서 웹 시스템)

  • Kim, Jeong-Joon;Kwak, Kwang-Jin;Park, Jeong-Min
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.2 / pp.119-125 / 2019
  • Fine particles have recently become a social problem: more people wear masks, and weather alerts and disaster notices have multiplied, prompting active research and policy work. Meteorologically, the greatest damage from fine particles arises from the temperature inversion layer phenomenon. In this study, we designed a system that issues fine-particle warnings by analyzing the inversion layer and the wind direction. The proposed weather information system achieves scalability and efficient parallel processing by using the OGC Sensor Web Enablement framework for sensor control and data exchange together with NoSQL storage.
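
A loose sketch of the inversion-layer check such a warning service implies; the Reading shape and the two-point minimum are invented for illustration, and the paper's actual analysis may differ:

```typescript
// A temperature inversion exists when temperature rises with altitude,
// trapping fine particles near the ground.
interface Reading { altitudeM: number; tempC: number; }

function inversionDetected(profile: Reading[]): boolean {
  const byAlt = [...profile].sort((a, b) => a.altitudeM - b.altitudeM);
  for (let i = 1; i < byAlt.length; i++) {
    if (byAlt[i].tempC <= byAlt[i - 1].tempC) return false; // normal lapse
  }
  return byAlt.length >= 2; // strictly warming with height across the profile
}

// The paper combines a check like this with wind direction before warning.
```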

External Merge Sorting in Tajo with Variable Server Configuration (매개변수 환경설정에 따른 타조의 외부합병정렬 성능 연구)

  • Lee, Jongbaeg;Kang, Woon-hak;Lee, Sang-won
    • Journal of KIISE / v.43 no.7 / pp.820-826 / 2016
  • There is a growing requirement for big data processing, which extracts valuable information from large amounts of data. The Hadoop system employs the MapReduce framework to process big data, but MapReduce has limitations such as inflexible and slow data processing. To overcome these drawbacks, SQL query processing techniques known as SQL-on-Hadoop were developed. Apache Tajo, one of the SQL-on-Hadoop systems, was developed by a Korean development group. External merge sort is one of the most heavily used algorithms in Tajo's query processing, and its performance is influenced by two parameters: sort buffer size and fanout. In this paper, we analyze the performance of external merge sort in Tajo across various sort buffer sizes and fanouts. We find two major causes of the performance differences: CPU cache misses, which increase as the sort buffer grows, and the number of merge passes, which is determined by the fanout.
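
A back-of-the-envelope sketch of how these two parameters interact; the merge-pass count is the standard external-sort relation, and all concrete numbers are illustrative assumptions, not the paper's measurements:

```typescript
function initialRuns(inputBytes: number, sortBufferBytes: number): number {
  // One sorted run is written out per sort-buffer fill.
  return Math.ceil(inputBytes / sortBufferBytes);
}

function mergePasses(runs: number, fanout: number): number {
  // Each pass merges up to `fanout` runs into one: passes = ceil(log_fanout(runs)).
  return runs <= 1 ? 0 : Math.ceil(Math.log(runs) / Math.log(fanout));
}

// A larger sort buffer yields fewer runs (hence fewer passes) but, per the
// paper, also more CPU cache misses while sorting each buffer in memory.
const runs = initialRuns(50 * 2 ** 30, 256 * 2 ** 20); // 50 GiB input, 256 MiB buffer -> 200 runs
console.log(mergePasses(runs, 16));                    // fanout 16 -> 2 merge passes
```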

Finding Frequent Route of Taxi Trip Events Based on MapReduce and MongoDB (택시 데이터에 대한 효율적인 Top-K 빈도 검색)

  • Putri, Fadhilah Kurnia;An, Seonga;Purnaningtyas, Magdalena Trie;Jeong, Han-You;Kwon, Joonho
    • KIPS Transactions on Software and Data Engineering / v.4 no.9 / pp.347-356 / 2015
  • Due to the rapid development of IoT (Internet of Things) technology, traditional taxis are now connected through dispatchers and location systems. Modern taxis are typically equipped with GPS (Global Positioning System) receivers for obtaining route information. By analyzing the frequency of taxi trip events, we can find the frequent routes for a given query time. However, converting the raw location data of taxi trip events into analyzed frequency information raises a scalability problem because of the sheer volume of location data. For this problem, we propose a NoSQL-based top-K query system for taxi trip events. First, we analyze raw taxi trip events and extract the frequencies of all routes. Then, we store the frequency information in a hash-based index structure in MongoDB, a document-oriented NoSQL database. Efficient top-K query processing for frequent routes is performed on top of MongoDB. We validate the efficiency of our algorithms using real taxi trip events from New York City.
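
An in-memory sketch of the frequency extraction and top-K step; the paper persists the counts in a hash-indexed MongoDB collection instead, and the Trip shape and route key below are invented:

```typescript
interface Trip { pickupZone: string; dropoffZone: string; hour: number; }

function topKRoutes(trips: Trip[], queryHour: number, k: number): [string, number][] {
  const freq = new Map<string, number>();
  for (const t of trips) {
    if (t.hour !== queryHour) continue;            // restrict to the query time
    const route = `${t.pickupZone}->${t.dropoffZone}`;
    freq.set(route, (freq.get(route) ?? 0) + 1);   // count each route occurrence
  }
  // Sort by descending frequency and keep the K most frequent routes.
  return [...freq.entries()].sort((a, b) => b[1] - a[1]).slice(0, k);
}
```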

Integration of SQL based Databases into World Wide Web (Web을 이용한 SQL 데이터베이스 통합 기술)

  • Kim, Mi-Hui;Im, Yeon-Ho;Park, Chan-Beom
    • Electronics and Telecommunications Trends / v.11 no.1 s.39 / pp.1-8 / 1996
  • As Web services come to dominate the Internet, research is moving beyond information retrieval services centered on Hyper Text Markup Language (HTML) documents toward integrating SQL databases with the Web and extending the result into business applications. In practice, many web servers already use the Common Gateway Interface (CGI) for database searches. Alongside CGI, WWW interface to DataBase (WDB) and Gateway Structure Query Language (GSQL), which advance CGI by a step from the user's point of view, have been introduced on the Internet. This article surveys the Web-database integration technologies currently under development from several angles, focusing on CGI and WDB.
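
A minimal sketch of the CGI pattern the article surveys, written as a Node.js/TypeScript program for consistency with the other sketches (a 1996 gateway would more likely be C or Perl); the "q" parameter and the canned result are invented, and a real gateway would query the SQL database and escape user input before echoing it:

```typescript
// The web server runs this program per request, passing form input via the
// QUERY_STRING environment variable; the program prints an HTTP header line,
// a blank line, and then the HTML result page.
const params = new URLSearchParams(process.env.QUERY_STRING ?? "");
const keyword = params.get("q") ?? "";

const rows = [`rows matching "${keyword}" would come from the SQL database`];

console.log("Content-Type: text/html\n"); // CGI header, then a blank line
console.log(`<html><body><h1>Search: ${keyword}</h1><ul>`);
for (const r of rows) console.log(`<li>${r}</li>`);
console.log("</ul></body></html>");
```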

Query Optimization Using PL/SQL for Efficient Extraction of Analysis Data from a Data Warehouse (데이터웨어하우스에서 효율적인 분석데이터 추출시 PL/SQL을 이용한 질의 최적화)

  • Jeong, Seung-Kyung
    • Proceedings of the Korea Information Processing Society Conference / 2006.11a / pp.313-316 / 2006
  • There has been much research on improving the performance of the data warehouses and data marts used for corporate decision making. However, as complex business requirements are added and corporate needs diversify, RDBMS performance steadily degrades, creating a need for a solution. To address this, this paper proposes an efficient query optimization method based on a PL/SQL package that acts as a buffer. Experiments demonstrate that the proposed method outperforms existing approaches.
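
The paper's buffer is a PL/SQL package inside the database; as a loose analogue of the caching idea only (compute an expensive result once, then serve repeats from memory), a TypeScript sketch with all names invented:

```typescript
const buffer = new Map<string, unknown>();

async function cachedQuery(
  sql: string,
  run: (sql: string) => Promise<unknown> // stand-in for the real query executor
): Promise<unknown> {
  if (!buffer.has(sql)) {
    buffer.set(sql, await run(sql)); // first call pays the full query cost
  }
  return buffer.get(sql); // later identical queries are answered from the buffer
}
```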

The Method of Deleted Record Recovery for MySQL MyISAM Database (MySQL MyISAM 데이터베이스의 삭제 레코드에 대한 복구 기법)

  • Noh, Woo-seon;Jang, Sung-min;Kang, Chul-hoon;Lee, Kyung-min;Lee, Sang-jin
    • Journal of the Korea Institute of Information Security & Cryptology / v.26 no.1 / pp.125-134 / 2016
  • The MySQL database currently has many users and holds a large share of the database market. Although the MyISAM storage engine was formerly MySQL's default storage engine, no recovery method for its deleted records has existed. Deleted records are highly likely to constitute important evidence, yet it is practically impossible for investigators to manually examine large databases directly. This paper suggests a universal recovery method for deleted MyISAM records and presents experimental results.
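
A heavily hedged sketch of the record-carving idea, assuming MyISAM's fixed-length (static) row format in which each row begins with a one-byte flag and a deleted row's data bytes remain on disk until the slot is reused; the row length (which really comes from the table definition), the flag test, and the file name are illustrative assumptions, not the paper's method:

```typescript
import { readFileSync } from "fs";

const ROW_LEN = 64;                     // assumed fixed row size, flag byte included
const myd = readFileSync("table.MYD");  // hypothetical MyISAM data file

for (let off = 0; off + ROW_LEN <= myd.length; off += ROW_LEN) {
  if ((myd[off] & 0x01) === 0) {        // assumed "deleted" flag state
    // Candidate deleted record: its column bytes may still be recoverable.
    console.log(`possible deleted row at offset ${off}`);
  }
}
```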

Comparison of DBMS Performance for processing Small Scale Database (소용량 데이터베이스 처리를 위한 DBMS의 성능 비교)

  • Jang, Si-Woong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2008.10a / pp.139-142 / 2008
  • While many comparisons of DBMS performance for processing large-scale databases are available as benchmark results, there are few comparisons of DBMS performance for processing small-scale databases. In this study, we therefore compared and analyzed the performance of a commercial DBMS and public DBMSs on a small-scale database. The analysis shows that while Oracle performs poorly on update and insert operations because of the rollback overhead it incurs for data safety, MySQL and MS-SQL perform well without that additional overhead.
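
The kind of insert micro-benchmark such a comparison implies can be sketched as follows, with better-sqlite3 as a stand-in engine; the DBMSs, schema, and workload in the paper differ, so the numbers here mean nothing beyond illustration:

```typescript
import Database from "better-sqlite3";

const db = new Database(":memory:");
db.exec("CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)");

// Time a batch of prepared-statement inserts, the operation the paper found
// most sensitive to per-engine overhead such as rollback bookkeeping.
const insert = db.prepare("INSERT INTO t (v) VALUES (?)");
const start = Date.now();
for (let i = 0; i < 10_000; i++) insert.run(`row-${i}`);
console.log(`10k inserts took ${Date.now() - start} ms`);
```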
