• Title/Summary/Keyword: SQL analysis


Vulnerability Analysis using the Web Vulnerability Scanner (Web Vulnerability Scanner를 이용한 취약성 분석)

  • Jang, Hee-Seon
    • Convergence Security Journal
    • /
    • v.12 no.4
    • /
    • pp.71-76
    • /
    • 2012
  • As the use of Mashups, Web 3.0, JavaScript, and AJAX (Asynchronous JavaScript and XML) increases, new security threats to web application services also increase. In order to diagnose vulnerabilities in advance and prepare for these threats, this paper presents a classification of security threats and requirements, and analyzes the web vulnerability of domestic web sites using the WVS (Web Vulnerability Scanner) automatic evaluation tool. For vulnerabilities such as XSS (Cross Site Scripting) and SQL Injection, the total alerts per site range from 0 to 31,177, with a mean of 411 and a standard deviation of 2,563. The results also show that 22.5% of the surveyed web sites have web vulnerabilities, and that proactive defenses against these security threats are required.
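The per-site summary statistics reported above (mean, standard deviation, share of vulnerable sites) are straightforward to reproduce. A minimal sketch using hypothetical alert counts — the paper's actual scan data is not reproduced here:

```python
import statistics

# Hypothetical per-site alert counts from a WVS scan (illustrative only;
# the paper's real data ranged from 0 to 31,177 alerts per site).
alert_counts = [0, 3, 12, 0, 150, 31177, 0, 25, 7, 0]

mean_alerts = statistics.mean(alert_counts)       # average alerts per site
std_alerts = statistics.pstdev(alert_counts)      # population std deviation
# Share of sites with at least one alert (the paper reports 22.5%)
vulnerable_ratio = sum(1 for c in alert_counts if c > 0) / len(alert_counts)
```

As in the paper's data, a few heavily vulnerable sites dominate the mean, which is why the standard deviation far exceeds it.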

A Study on Designing Schema for Process Repository (프로세스 리파지토리를 위한 스키마 설계에 관한 연구)

  • 이재정;정호석;홍순구
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.8 no.2
    • /
    • pp.98-107
    • /
    • 2003
  • The purpose of this study is to propose a schema for a process repository that overcomes the limitations of existing process repositories, in which it is difficult to compare and analyze unstructured data such as process models. To achieve this goal, previous studies on process repositories are reviewed, a schema for a process repository is designed, and the corresponding tables are created in a relational database. To validate the schema presented in this paper, a process model stored in the relational database is examined using SQL. The contributions of this study are as follows. First, a schema for a process repository that stores process models is designed and validated. Second, a process model created with the new schema and stored in a relational database improves the performance of process analysis.
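The validation approach described above — storing a process model in relational tables and examining it with SQL — can be sketched with an in-memory SQLite database. The table and column names below are illustrative assumptions, not the paper's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Hypothetical minimal schema: processes, their activities, and control flow
cur.executescript("""
CREATE TABLE process  (proc_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE activity (act_id INTEGER PRIMARY KEY,
                       proc_id INTEGER REFERENCES process(proc_id),
                       name TEXT);
CREATE TABLE flow     (from_act INTEGER, to_act INTEGER);
""")
cur.execute("INSERT INTO process VALUES (1, 'Order handling')")
cur.executemany("INSERT INTO activity VALUES (?, ?, ?)",
                [(1, 1, 'Receive order'), (2, 1, 'Check stock'), (3, 1, 'Ship')])
cur.executemany("INSERT INTO flow VALUES (?, ?)", [(1, 2), (2, 3)])

# Example analysis query: number of activities per process model
cur.execute("""SELECT p.name, COUNT(a.act_id)
               FROM process p JOIN activity a ON p.proc_id = a.proc_id
               GROUP BY p.name""")
rows = cur.fetchall()
print(rows)  # → [('Order handling', 3)]
```

Once a model is flattened into tables like these, standard SQL joins and aggregates serve as the comparison and analysis mechanism the abstract describes.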


HBase based Business Process Event Log Schema Design of Hadoop Framework

  • Ham, Seonghun;Ahn, Hyun;Kim, Kwanghoon Pio
    • Journal of Internet Computing and Services
    • /
    • v.20 no.5
    • /
    • pp.49-55
    • /
    • 2019
  • Organizations design and operate business process models to achieve their goals efficiently and systematically. With the advancement of IT, the number of tasks in which computer systems can participate has grown, and processes have become huge and complicated, creating more complex and subdivided business process flows. Process instances, which contain workcases and events, have become larger and carry more data. The event log is an essential resource for process mining and is used directly in model discovery, analysis, and improvement of processes. As event logs grow bigger and broader, problems such as capacity management and I/O load arise when they are managed as conventional row-level files or through a relational database. In this paper, we identify the management limits of file-based and relational-database storage as event logs become big data, and we design and apply schemas to archive and analyze large event logs through the open-source Hadoop distributed framework and HBase, a NoSQL database system.
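A central decision in any HBase event-log schema is the row-key layout, since HBase sorts and partitions rows lexicographically by key. A hedged sketch of one possible layout — the field names and zero-padded widths are assumptions for illustration, not the paper's actual design:

```python
def event_row_key(process_id: str, instance_id: int, timestamp_ms: int) -> bytes:
    """Compose a row key as process:instance:timestamp with fixed-width,
    zero-padded numeric fields, so that lexicographic byte order groups
    events by process and instance and then sorts them by time."""
    return f"{process_id}:{instance_id:010d}:{timestamp_ms:013d}".encode()

# Keys for the same process instance sort in event-time order,
# so a scan over one instance replays its events chronologically.
k1 = event_row_key("claim", 7, 999)
k2 = event_row_key("claim", 7, 1000)
```

Fixed-width padding matters: without it, the key for timestamp 1000 would sort before the key for 999 under byte-wise comparison.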

Development of Hydrologic Data Management System Based on Relational Database (관계형 데이터베이스를 이용한 수문자료 관리시스템 개발)

  • Kim, Hak-Kwan;Park, Seung-Woo;Kim, Sang-Min
    • Journal of Korea Water Resources Association
    • /
    • v.39 no.10 s.171
    • /
    • pp.855-866
    • /
    • 2006
  • In this paper, a Hydrologic Data Management System (HDMS) was developed for the efficient management of hydrologic data. The applicability of the system was demonstrated using the hydrologic data of a study watershed located southwest of Suwon city. MySQL 5.0, a relational database management system, and MS Visual Basic 6.0 were used for the development of the MS Windows-based HDMS. The primary components of the HDMS are the data search, data management, and data analysis systems. The data search and management systems provide basic functions for efficient data search, storage, update, and export. The data analysis system enables users to derive further, diverse hydrologic statistical information from the stored data. Furthermore, the accuracy and quality of the hydrologic data were analyzed and evaluated through the data analysis system.

A Study on N-IDS Detection and Packet Analysis regarding a DoS attack (DoS공격에 대한 N-IDS 탐지 및 패킷 분석 연구)

  • Chun, Woo-Sung;Park, Dea-Woo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.6
    • /
    • pp.217-224
    • /
    • 2008
  • This paper studies the DoS attacks against banking institutions and government organizations that occurred in 2008. We used an actual DoS attack tool and installed an N-IDS based on Snort in the network to detect DoS attacks. Winpcap was installed for packet capture, and MySQL, HSC, and the .NET Framework for packet storage and analysis. Using packet analysis tools such as E-Watch, we analyze the TCP, UDP, port, MAC, and IP information of a hacker's DoS attack packets. This study is meaningful in that it analyzes data on the cyber DoS and DDoS attacks that are a dysfunction of the ubiquitous information society, generates forensic and back-tracking data on intruders, and thereby helps ensure a safe Internet information system.
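The kind of per-source packet accounting that underlies DoS detection can be illustrated with a simple threshold rule. This is a conceptual sketch, not the Snort/E-Watch pipeline used in the paper:

```python
from collections import Counter

def flag_dos_sources(packets, threshold=100):
    """packets: iterable of (src_ip, dst_ip, proto) tuples captured in one
    time window. Flags any source IP whose packet count meets the threshold —
    a crude stand-in for an N-IDS rate-based DoS signature."""
    counts = Counter(src for src, _, _ in packets)
    return {ip for ip, n in counts.items() if n >= threshold}

# One flooding source among normal background traffic
window = ([("10.0.0.9", "10.0.0.1", "TCP")] * 150 +
          [("10.0.0.2", "10.0.0.1", "UDP")] * 3)
flagged = flag_dos_sources(window, threshold=100)
```

Real IDS rules add time windows, protocol-specific thresholds, and spoofed-source handling; the threshold value here is purely illustrative.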


Design and Theoretical Analysis of a Stepwise Intrusion Prevention Scheme (단계적 비정상 트래픽 대응 기법 설계 및 이론적 분석)

  • Ko Kwangsun;Kang Yong-hyeog;Eom Young Ik
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.16 no.1
    • /
    • pp.55-63
    • /
    • 2006
  • Recently, much abnormal traffic has been driven by worms such as Nimda, Code Red, and SQL Slammer, causing severe damage to networks. Meanwhile, diverse prevention schemes for defeating abnormal traffic have been studied in academia and industry. In this paper, we present the structure of a stepwise intrusion prevention system designed to limit the network bandwidth of each traffic flow and to drop abnormal traffic, and then compare the proposed scheme with a pre-existing True/False-based anomaly prevention scheme for several worm patterns. Two criteria are used to compare the schemes: the Normal Traffic Rate (NTR) and the False Positive Rate (FPR). Assuming that the abnormal traffic rate of a specific network is $\beta$ during a predefined time window, the average NTR of our stepwise intrusion prevention scheme is higher than that of the True/False-based anomaly prevention scheme by a factor of $(1+\beta)/2$, and the average FPR of our scheme decreases by the same factor.
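The stated (1+β)/2 ratio between the two schemes is easy to evaluate numerically. A minimal sketch that simply reproduces the paper's stated factor (it does not re-derive the result):

```python
def stepwise_factor(beta: float) -> float:
    """Ratio (1 + beta) / 2 by which the stepwise scheme's average NTR
    relates to the True/False-based scheme's, per the paper's analysis;
    the same factor scales down the average FPR. beta is the abnormal
    traffic rate in the time window, 0 <= beta <= 1."""
    return (1 + beta) / 2

# e.g. with 40% abnormal traffic the factor is 0.7; at beta = 1 the
# two schemes coincide (factor 1).
f = stepwise_factor(0.4)
```

Note that the factor approaches 1/2 as β → 0 and 1 as β → 1, so the schemes' behavior converges when the window is entirely abnormal traffic.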

Implementation of Reporting Tool Supporting OLAP and Data Mining Analysis Using XMLA (XMLA를 사용한 OLAP과 데이타 마이닝 분석이 가능한 리포팅 툴의 구현)

  • Choe, Jee-Woong;Kim, Myung-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.3
    • /
    • pp.154-166
    • /
    • 2009
  • Database query and reporting tools, OLAP tools, and data mining tools are typical front-end tools in a Business Intelligence (BI) environment, which supports gathering, consolidating, and analyzing data produced from business operations and provides enterprise users with access to the results. Traditional reporting tools have the advantage of creating sophisticated dynamic reports, including SQL query result sets, that look like documents produced by word processors, and of publishing those reports to the Web, but their data sources are limited to RDBMSs. On the other hand, OLAP tools and data mining tools each provide powerful information analysis functions in their own way, but their built-in visualization components are limited to tables and some charts. Thus, this paper presents a system that integrates these three typical front-end tools so that they complement one another in a BI environment. Traditional reporting tools have only a query editor for generating SQL statements to retrieve data from an RDBMS; the reporting tool presented in this paper can also extract data from OLAP and data mining servers, because editors for OLAP and data mining query requests have been added to the tool. Traditional systems produce all documents on the server side, a structure that lets reporting tools avoid regenerating a document repeatedly when many clients access the same dynamic document. Because this system instead targets a few users who generate documents for data analysis, it generates documents on the client side, and therefore includes a processing mechanism to handle large volumes of data despite the limited memory capacity of the client-side report viewer. The tool also has a data structure for integrating data from the three kinds of sources into one document. Finally, most traditional BI front-end tools depend on a specific vendor's data source architecture; to overcome this problem, the system uses XMLA, a protocol based on web services, to access OLAP and data mining services from various vendors.
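Client-side document generation under a limited viewer memory, as described above, typically relies on paged fetching of result sets. A minimal sketch, assuming a hypothetical run_query(offset, limit) callback rather than the tool's actual XMLA interface:

```python
def fetch_in_pages(run_query, page_size=500):
    """Lazily yield result rows page by page, so the client never holds
    the whole result set in memory at once. run_query(offset, limit) is a
    hypothetical callback that returns a list of rows (empty when done)."""
    offset = 0
    while True:
        rows = run_query(offset, page_size)
        if not rows:
            return
        yield from rows
        offset += len(rows)

# Simulated backend: 1234 rows served in slices
data = list(range(1234))
def fake_query(offset, limit):
    return data[offset:offset + limit]

streamed = list(fetch_in_pages(fake_query, page_size=500))
```

The renderer can consume the generator row by row while writing document output, which is the essence of producing large documents in a memory-constrained client viewer.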

DEVELOPMENT AND VALIDATION OF A NUCLEAR FUEL CYCLE ANALYSIS TOOL: A FUTURE CODE

  • Kim, S.K.;Ko, W.I.;Lee, Yoon Hee
    • Nuclear Engineering and Technology
    • /
    • v.45 no.5
    • /
    • pp.665-674
    • /
    • 2013
  • This paper presents the development and validation methods of the FUTURE (FUel cycle analysis Tool for nUcleaR Energy) code, which was developed for a dynamic material flow evaluation and economic analysis of the nuclear fuel cycle. This code enables an evaluation of a nuclear material flow and its economy for diverse nuclear fuel cycles based on a predictable scenario. The most notable virtue of this FUTURE code, which was developed using C# and MICROSOFT SQL DBMS, is that a program user can design a nuclear fuel cycle process easily using a standard process on the canvas screen through a drag-and-drop method. From the user's point of view, this code is very easy to use thanks to its high flexibility. In addition, the new code also enables the maintenance of data integrity by constructing a database environment of the results of the nuclear fuel cycle analyses.

New Approach to the Analysis of Palindromic Structure in Genome Sequences

  • Kim, Seok-Won;Lee, Yong-Seok;Choi, Sang-Haeng;Chae, Sung-Hwa;Kim, Dae-Won;Park, Hong-Seog
    • Genomics & Informatics
    • /
    • v.4 no.4
    • /
    • pp.167-169
    • /
    • 2006
  • PABAP (Palindrome Analysis by BLAST Program) is an analysis system that identifies palindromic sequences in large genome sequences up to several megabases long. It uses NCBI BLAST as its search engine, and data processing such as alignment filtering and detection of inverted repeats that satisfy user-defined parameters is performed after the data are populated into a MySQL database. PABAP outperforms publicly available palindrome search programs in that it can detect large palindromes with internal spacers from bacterial genomes at a faster speed. It is a standalone application and is freely available for noncommercial users.
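The objects PABAP reports — inverted repeats whose two arms are reverse complements, optionally separated by an internal spacer — can be illustrated with a naive scan. This conceptual sketch checks every position directly; PABAP itself finds the arms via NCBI BLAST alignments:

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def find_inverted_repeats(seq, arm_len=4, max_spacer=3):
    """Naive scan: report (start, spacer) wherever the arm at seq[i:i+arm_len]
    is followed, after <= max_spacer bases, by its reverse complement.
    arm_len and max_spacer play the role of user-defined parameters."""
    hits = []
    for i in range(len(seq) - 2 * arm_len + 1):
        arm = seq[i:i + arm_len]
        for spacer in range(max_spacer + 1):
            j = i + arm_len + spacer
            if j + arm_len > len(seq):
                break
            if seq[j:j + arm_len] == revcomp(arm):
                hits.append((i, spacer))
    return hits
```

For example, the EcoRI site GAATTC is a perfect palindrome with no spacer: the arm GAA is immediately followed by its reverse complement TTC. A BLAST-based approach scales this idea to megabase genomes by aligning the sequence against its own reverse complement.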

Detecting Security Vulnerabilities in TypeScript Code with Static Taint Analysis (정적 오염 분석을 활용한 타입스크립트 코드의 보안 취약점 탐지)

  • Moon, Taegeun;Kim, Hyoungshick
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.31 no.2
    • /
    • pp.263-277
    • /
    • 2021
  • Taint analysis techniques are widely used to detect web vulnerabilities originating from unverified user input, such as Cross-Site Scripting (XSS) and SQL Injection, in web applications written in JavaScript. Detecting such vulnerabilities requires tracing the variables affected by user-submitted input, but because of the dynamic nature of JavaScript, identifying those variables without running the application code has been a challenging problem. Therefore, most existing taint analysis tools are based on dynamic taint analysis, which incurs the overhead of running the target application. In this paper, we propose a novel static taint analysis technique that uses symbol information obtained from the TypeScript (a superset of JavaScript) compiler to accurately track data flow and detect security vulnerabilities in TypeScript code. The proposed technique allows developers to annotate variables that can contain unverified user input, and uses that annotation information to trace the variables and data affected by the input. Since the technique can be incorporated seamlessly into the TypeScript compiler, developers can find vulnerabilities during the development process, unlike existing analyses that run as separate tools. To show the feasibility of the proposed method, we implemented a prototype and evaluated it on 8 web applications with known security vulnerabilities; the prototype correctly detected all of them.
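The proposed analysis is TypeScript-specific, but the core idea of static taint propagation — marking annotated sources and transitively tainting anything assigned from them — can be sketched as a fixpoint over assignment edges. A conceptual sketch only, not the paper's compiler-integrated implementation:

```python
def propagate_taint(assignments, sources):
    """Static taint propagation to a fixpoint.
    assignments: list of (lhs, rhs_vars) pairs, i.e. 'lhs = f(rhs_vars...)'.
    sources: variable names annotated as carrying unverified user input.
    Returns the set of all variables that may be influenced by a source."""
    tainted = set(sources)
    changed = True
    while changed:                       # iterate until no new taint spreads
        changed = False
        for lhs, rhs in assignments:
            if lhs not in tainted and any(v in tainted for v in rhs):
                tainted.add(lhs)
                changed = True
    return tainted

# 'a = user_input; b = a; c = safe' — taint reaches a and b, not c
flows = [("a", ["user_input"]), ("b", ["a"]), ("c", ["safe"])]
result = propagate_taint(flows, {"user_input"})
```

A real implementation resolves these assignment edges from compiler symbol information (as the paper does via the TypeScript compiler) and reports a vulnerability when a tainted variable reaches a sensitive sink such as a query or DOM API.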