• Title/Summary/Keyword: software algorithms


An Implementation of Network Intrusion Detection Engines on Network Processors (네트워크 프로세서 기반 고성능 네트워크 침입 탐지 엔진에 관한 연구)

  • Cho, Hye-Young;Kim, Dae-Young
    • Journal of KIISE: Information Networking, v.33 no.2, pp.113-130, 2006
  • Recently, with the explosive growth of Internet applications, attacks by hackers on networks are increasing rapidly and becoming more serious. Information security is thus emerging as a critical factor in designing a network system, and much attention is paid to the Network Intrusion Detection System (NIDS), which detects hackers' attacks on a network and handles them properly. However, the performance of current intrusion detection systems cannot keep up with ever-increasing Internet speeds, because most NIDSs are implemented in software. In this paper, we propose a new high-performance network intrusion detection system based on a network processor. To achieve fast packet processing and dynamic adaptation to intrusion patterns that are continuously added, the system is built on Intel's IXP1200 network processor. Unlike traditional intrusion detection engines, which have so far been implemented in either software or hardware, we design an optimized architecture and algorithms that exploit the features of the network processor. In addition, for more efficient detection engine scheduling, we propose task allocation methods for multi-processing processors. Through implementation and performance evaluation, we show the validity of the proposed approach.
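The signature-matching and task-allocation ideas in this abstract can be sketched in a few lines. This is only an illustration in plain Python, not the paper's IXP1200 microengine code; the signatures and the hash-based allocation policy are hypothetical examples.

```python
# Illustrative sketch only: plain Python standing in for the paper's
# IXP1200 microengine code. The signatures and the hash-based allocation
# policy are hypothetical examples, not the paper's actual rules.
SIGNATURES = [b"/etc/passwd", b"cmd.exe", b"DROP TABLE"]

def detect(payload: bytes) -> list:
    """Return every known attack signature found in a packet payload."""
    return [sig for sig in SIGNATURES if sig in payload]

def allocate(payload: bytes, n_engines: int) -> int:
    """Task allocation: pick one of several detection engines by hashing
    the payload, so work spreads across the multi-processing engines."""
    return hash(payload) % n_engines
```

The real engine replaces the linear scan with an optimized pattern-matching structure and allocates tasks according to measured engine load rather than a plain hash.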

An Effective Data Analysis System for Improving Throughput of Shotgun Proteomic Data based on Machine Learning (대량의 프로테옴 데이타를 효과적으로 해석하기 위한 기계학습 기반 시스템)

  • Na, Seung-Jin;Paek, Eun-Ok
    • Journal of KIISE: Software and Applications, v.34 no.10, pp.889-899, 2007
  • In proteomics, recent advancements in mass spectrometry technology and in protein extraction and separation technology have made high-throughput analysis possible. This leads to thousands to hundreds of thousands of MS/MS spectra per single LC-MS/MS experiment. Such a large amount of data creates significant computational challenges, so effective data analysis methods that make efficient use of computational resources and, at the same time, provide more peptide identifications are greatly needed. Here, the SIFTER system is designed to avoid inefficient processing of shotgun proteomic data. SIFTER provides software tools that can improve the throughput of mass spectrometry-based peptide identification by filtering out poor-quality tandem mass spectra and estimating a peptide charge state prior to applying analysis algorithms. SIFTER tools characterize and assess spectral features and thus significantly reduce computation time and false positive rates by localizing, prior to full-blown analysis, spectra that would lead to wrong identifications. SIFTER enables fast and in-depth interpretation of tandem mass spectra.

Identification of Fuzzy Inference System Based on Information Granulation

  • Huang, Wei;Ding, Lixin;Oh, Sung-Kwun;Jeong, Chang-Won;Joo, Su-Chong
    • KSII Transactions on Internet and Information Systems (TIIS), v.4 no.4, pp.575-594, 2010
  • In this study, we propose a space search algorithm (SSA) and then introduce a hybrid optimization of fuzzy inference systems based on SSA and information granulation (IG). In comparison with "conventional" evolutionary algorithms (such as PSO), SSA leads not only to better search performance in finding the global optimum but is also more computationally effective when dealing with the optimization of fuzzy models. In the hybrid optimization of the fuzzy inference system, SSA is exploited to carry out the parametric optimization of the fuzzy model as well as to realize its structural optimization. IG, realized with the aid of C-Means clustering, helps determine the initial values of the apex parameters of the membership functions of the fuzzy model. The overall hybrid identification of fuzzy inference systems comes in the form of two optimization mechanisms: structure identification (such as the number of input variables to be used, a specific subset of input variables, the number of membership functions, and the polynomial type) and parameter identification (viz., the apexes of the membership functions). Structure identification is developed by SSA and C-Means, while parameter estimation is realized via SSA and a standard least-squares method. The performance of the proposed model was evaluated using four representative numerical examples: a non-linear function, the gas furnace data, NOx emission process data, and the Mackey-Glass time series. A comparative study of SSA and PSO demonstrates that SSA leads to improved performance both in terms of the quality of the model and the computing time required. The proposed model is also contrasted with the quality of some "conventional" fuzzy models already encountered in the literature.
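As a hedged illustration of the IG step, a plain fuzzy C-Means pass over the input data yields cluster centers that could serve as initial apexes of the membership functions. This is a generic textbook FCM, not the paper's exact procedure; the data and parameters below are arbitrary.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=50, seed=0):
    """Textbook fuzzy C-Means (illustrative sketch, not the paper's code).
    The returned cluster centers play the role of initial apexes for the
    fuzzy model's membership functions."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships sum to 1
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = 1.0 / d ** (2.0 / (m - 1.0))     # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

On well-separated data the centers converge to the cluster means, which is exactly what the IG step needs as starting apexes.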

User control based OTT content search algorithms (사용자 제어기반 OTT 콘텐츠 검색 알고리즘)

  • Kim, Ki-Young;Suh, Yu-Hwa;Park, Byung-Joon
    • Journal of the Korea Society of Computer and Information, v.20 no.5, pp.99-106, 2015
  • This research focuses on the development of a proprietary database embedded in the OTT device, used for searching and indexing video contents, and on the development of a search algorithm forming the critical components of an interface application to the OTT's database that provides video query searching, such as a remote-control smartphone application. As the number of available channels has increased to anywhere from dozens to hundreds, it has become increasingly difficult for viewers to find programs they want to watch. To address this issue, content providers are now in need of methods to recommend programs catering to each viewer's preference. The present study therefore also aims to provide an algorithm that recommends OTT program content by analyzing a user's personal watching pattern based on viewing history.
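The recommendation idea in the abstract, ranking content by a viewer's per-genre watching frequency, can be sketched as follows. The history records, catalog, and genre-count scoring are hypothetical; the paper's actual embedded database schema and algorithm are not reproduced here.

```python
from collections import Counter

def recommend(history, catalog, k=2):
    """Rank catalog titles by how often the user watched their genre.
    A hypothetical stand-in for the paper's history-based recommender."""
    prefs = Counter(history)                        # genre -> watch count
    ranked = sorted(catalog, key=lambda title: prefs[catalog[title]],
                    reverse=True)
    return ranked[:k]

# Invented watch history (genres) and OTT catalog (title -> genre).
history = ["drama", "drama", "sports", "drama", "news"]
catalog = {"Show A": "drama", "Show B": "sports", "Show C": "cooking"}
```

A production system would score at a finer granularity (series, actors, time of day) and read from the embedded database rather than in-memory literals.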

Applicability of Geo-spatial Processing Open Sources to Geographic Object-based Image Analysis (GEOBIA)

  • Lee, Ki-Won;Kang, Sang-Goo
    • Korean Journal of Remote Sensing, v.27 no.3, pp.379-388, 2011
  • At present, GEOBIA (Geographic Object-based Image Analysis), the successor of OBIA (Object-based Image Analysis), is regarded as an important methodology in the object-oriented paradigm for remote sensing, dealing with geo-objects through image segmentation and classification from a viewpoint different from pixel-based processing. It also links directly to GIS applications. Thus, GEOBIA software is booming. The main theme of this study is to look into the applicability of geo-spatial processing open sources to GEOBIA. However, there are few fully featured open sources for GEOBIA, which requires complicated schemes and algorithms. In this study, a preliminary system for GEOBIA was implemented to run in an integrated and user-oriented environment. This work was performed using various open sources such as OTB and PostgreSQL/PostGIS. Some points differ from widely used proprietary GEOBIA software. In this system, geo-objects are not file-based but tightly linked with GIS layers in a spatial database management system. The mean shift algorithm, with parameters associated with spatial similarities or homogeneities, is used for image segmentation. For the classification process, a tree-based model of a hierarchical network composed of parent and child nodes is implemented by attribute join in a semi-automatic mode, unlike traditional image-based classification. Of course, this integrated GEOBIA system is still at the development stage, and further work is necessary. It is expected that this approach will help develop and extend new applications, such as urban mapping or change detection linked to GIS data sets, using GEOBIA.
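The mean shift segmentation step can be illustrated with a minimal flat-kernel implementation over toy 2-D points. This is a generic sketch, not the OTB-based pipeline or its spatial-similarity parameters.

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=30):
    """Flat-kernel mean shift: each query point repeatedly moves to the
    mean of the data points within `bandwidth`, so points belonging to the
    same mode converge together. Illustrative of the segmentation idea;
    real GEOBIA runs this on pixel feature vectors, not toy coordinates."""
    shifted = points.astype(float).copy()
    for _ in range(n_iter):
        for i, p in enumerate(shifted):
            d = np.linalg.norm(points - p, axis=1)
            shifted[i] = points[d < bandwidth].mean(axis=0)
    return shifted
```

After convergence, points that landed on the same mode form one segment; in an image, contiguous pixels sharing a mode become one geo-object.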

Intelligent Hybrid Fusion Algorithm with Vision Patterns for Generation of Precise Digital Road Maps in Self-driving Vehicles

  • Jung, Juho;Park, Manbok;Cho, Kuk;Mun, Cheol;Ahn, Junho
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.10, pp.3955-3971, 2020
  • Due to the significant increase in the use of autonomous car technology, it is essential to integrate this technology with high-precision digital map data containing more precise and accurate roadway information than existing conventional map resources, to ensure the safety of self-driving operations. While existing map technologies may assist vehicles in identifying their locations via the Global Positioning System, it is, however, difficult to keep such maps updated with environmental changes to roadways. Roadway vision algorithms can be useful for building autonomous vehicles that can avoid accidents and detect real-time location changes. We incorporate a hybrid architectural design that combines unsupervised classification of vision data with supervised joint fusion classification to achieve a better noise-resistant algorithm. We identify, via a deep learning approach, an intelligent hybrid fusion algorithm for fusing multimodal vision feature data for roadway classifications and characterize its improvement in accuracy over unsupervised identifications using image processing and supervised vision classifiers. We analyzed over 93,000 vision data frames collected from a test vehicle on real roadways. The performance indicators of the proposed hybrid fusion algorithm are successfully evaluated for the generation of roadway digital maps for autonomous vehicles, with a recall of 0.94, precision of 0.96, and accuracy of 0.92.
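The reported figures (recall 0.94, precision 0.96, accuracy 0.92) follow the standard confusion-matrix definitions, which can be made concrete as below. The counts in the example are hypothetical, not the paper's 93,000-frame results.

```python
def metrics(tp, fp, fn, tn):
    """Standard definitions behind reported recall/precision/accuracy."""
    recall = tp / (tp + fn)                    # found among all true positives
    precision = tp / (tp + fp)                 # correct among all positive calls
    accuracy = (tp + tn) / (tp + fp + fn + tn)  # correct among all frames
    return recall, precision, accuracy
```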

Reverse Simulation Software Architecture for Required Performance Analysis of Defense System (국방 시스템의 요구 성능 분석을 위한 역 방향 시뮬레이션 소프트웨어 아키텍처)

  • Hong, Jeong Hee;Seo, Kyung-Min;Kim, Tag Gon
    • The Journal of Korean Institute of Communications and Information Sciences, v.40 no.4, pp.750-759, 2015
  • This paper focuses on reverse simulation methods to find and analyze the required performance of a defense system under a given combat effectiveness. Our approach is motivated by the fact that forward simulation, which is traditionally employed for the effectiveness analysis of performance alternatives, is not suitable for resolving this issue because it incurs a high computational cost from repeatedly simulating all possible alternatives. To this end, the paper proposes a reverse simulation software architecture, which consists of several functional sub-modules that facilitate two types of reverse simulation according to the possibility of inverse model design. The proposed architecture also enables various search algorithms to be applied to find the required operational capability efficiently. With this architecture, we performed two case studies on underwater and anti-air warfare scenarios. The case studies show that the proposed reverse simulation incurs a smaller computational cost while finding the same level of performance alternatives as traditional forward simulation. Finally, we expect that this study provides a guide for those who must make decisions about new defense systems development.
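One search algorithm such an architecture could plug in is simple bisection over a single performance parameter, assuming simulated effectiveness is monotone in that parameter. This is a hedged sketch, not the paper's actual search modules.

```python
def required_performance(simulate, target, lo, hi, tol=1e-3):
    """Bisection search: find (approximately) the smallest parameter value
    whose simulated combat effectiveness meets the required target.
    Assumes effectiveness is monotone in the parameter (a simplification)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if simulate(mid) >= target:
            hi = mid          # target met: try a smaller requirement
        else:
            lo = mid          # target missed: need more performance
    return hi
```

Each bisection step runs one forward simulation, so reaching the requirement costs O(log(range/tol)) simulations instead of sweeping every alternative.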

Developing a Dynamic Materialized View Index for Efficiently Discovering Usable Views for Progressive Queries

  • Zhu, Chao;Zhu, Qiang;Zuzarte, Calisto;Ma, Wenbin
    • Journal of Information Processing Systems, v.9 no.4, pp.511-537, 2013
  • Numerous data intensive applications demand the efficient processing of a new type of query, which is called a progressive query (PQ). A PQ consists of a set of unpredictable but inter-related step-queries (SQ) that are specified by its user in a sequence of steps. A conventional DBMS was not designed to efficiently process such PQs. In our earlier work, we introduced a materialized view based approach for efficiently processing PQs, where the focus was on selecting promising views for materialization. The problem of how to efficiently find usable views from the materialized set in order to answer the SQs for a PQ remains open. In this paper, we present a new index technique, called the Dynamic Materialized View Index (DMVI), to rapidly discover usable views for answering a given SQ. The structure of the proposed index is a special ordered tree where the SQ domain tables are used as search keys and some bitmaps are kept at the leaf nodes for refined filtering. A two-level priority rule is adopted to order domain tables in the tree, which facilitates the efficient maintenance of the tree by taking into account the dynamic characteristics of various types of materialized views for PQs. The bitmap encoding methods and the strategies/algorithms to construct, search, and maintain the DMVI are suggested. The extensive experimental results demonstrate that our index technique is quite promising in improving the performance of the materialized view based query processing approach for PQs.
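The lookup idea, domain tables as search keys plus bitmaps for refined filtering, can be miniaturized as follows. A flat dictionary stands in for the real ordered tree and its two-level priority rule, and the views and column bitmaps are invented examples.

```python
# Miniature stand-in for the DMVI lookup: a flat dict keyed by the set of
# domain tables replaces the ordered tree, and an integer bitmap per view
# supports the refined filtering step. Views/bitmaps below are invented.
index = {}  # frozenset of domain tables -> list of (view_name, column_bitmap)

def register(view, tables, column_bitmap):
    """Record a materialized view under its domain tables."""
    index.setdefault(frozenset(tables), []).append((view, column_bitmap))

def usable_views(sq_tables, sq_column_bitmap):
    """Views over the SQ's tables whose column bitmap covers the SQ's
    columns (a simplified usability test, not the paper's full criteria)."""
    return [v for v, bm in index.get(frozenset(sq_tables), [])
            if bm & sq_column_bitmap == sq_column_bitmap]
```

The real DMVI additionally orders the tables by the two-level priority rule so that insertions and deletions of dynamic views stay cheap.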

Study on the Methods of Enhancing the Quality of DIBR-based Multiview Intermediate Images using Depth Expansion and Mesh Construction (깊이 정보 확장과 메쉬 구성을 이용한 DIBR 기반 다시점 중간 영상 화질 향상 방법에 관한 연구)

  • Park, Kyoung Shin;Kim, Jiseong;Cho, Yongjoo
    • Journal of the Korea Institute of Information and Communication Engineering, v.19 no.1, pp.127-135, 2015
  • In this research, we conducted an experiment evaluating a depth-information extension method, a surface reconstruction method, and the interaction of the two, in order to enhance the final intermediate view images acquired using the DIBR (Depth-Image-Based Rendering) method. We evaluated the experimental control groups using Microsoft's "Ballet" and "Break Dancer" data sets with three different hole-filling algorithms. The results revealed that quality was improved the most by applying both depth-information extension and surface reconstruction, as compared to using the previous point clouds only. In addition, we found that the quality of the intermediate images improved greatly with depth-information extension alone when no hole-filling algorithm was used.

Analyzing Machine Learning Techniques for Fault Prediction Using Web Applications

  • Malhotra, Ruchika;Sharma, Anjali
    • Journal of Information Processing Systems, v.14 no.3, pp.751-770, 2018
  • Web applications are indispensable in the software industry and continuously evolve, either to meet newer criteria and/or to include new functionalities. However, despite assuring quality via testing, the presence of defects hinders straightforward development. Several factors contribute to defects, and they are often minimized at high expense in terms of man-hours. Thus, the detection of fault proneness in the early phases of software development is important, and a fault prediction model for identifying fault-prone classes in a web application is highly desired. In this work, we compare 14 machine learning techniques to analyse the relationship between object-oriented metrics and fault prediction in web applications. The study is carried out using various releases of the Apache Click and Apache Rave datasets. En route to the predictive analysis, the input basis set for each release is first optimized using the filter-based correlation feature selection (CFS) method. It is found that the LCOM3, WMC, NPM, and DAM metrics are the most significant predictors. The statistical analysis of these metrics also shows good conformity with the CFS evaluation and affirms the role of these metrics in the defect prediction of web applications. The overall predictive ability of the different fault prediction models is first ranked using the Friedman technique and then statistically compared using Nemenyi post-hoc analysis. The results not only uphold the predictive capability of machine learning models for identifying faulty classes in web applications, but also show that ensemble algorithms are the most appropriate for defect prediction on the Apache datasets. Further, we also derive a consensus between the metrics selected by the CFS technique and the statistical analysis of the datasets.
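A minimal stand-in for the filter-based selection step: rank object-oriented metrics by the absolute correlation of each with the fault label. This is plain per-feature correlation, not the full CFS merit function used in the paper, and the data in the test is synthetic.

```python
import numpy as np

def rank_features(X, y, names):
    """Rank metric columns of X (e.g. WMC, LCOM3) by the absolute Pearson
    correlation of each with the fault label y. A simplified filter step,
    not the paper's complete CFS procedure."""
    corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return [name for _, name in sorted(zip(corr, names), reverse=True)]
```

CFS proper additionally penalizes inter-feature correlation so that the selected subset is both predictive and non-redundant; this sketch only captures the "relevance to the label" half.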