• Title/Summary/Keyword: scalability issue

Search Results: 81

Development Research of An Efficient Malware Classification System Using Hybrid Features And Machine Learning (하이브리드 특징 및 기계학습을 활용한 효율적인 악성코드 분류 시스템 개발 연구)

  • Yu, Jung-Been; Oh, Sang-Jin; Park, Leo-Hyun; Kwon, Tae-Kyoung
    • Journal of the Korea Institute of Information Security & Cryptology, v.28 no.5, pp.1161-1167, 2018
  • In order to cope with the dramatic increase in malware variants, malware classification research is diversifying. Recent research tends to recognize the individual limits of existing malware analysis techniques (static and dynamic) and to combine them into a single "hybrid analysis". Furthermore, machine learning is being applied to identify more accurately the malware variants that are difficult to classify. However, the accuracy/scalability trade-off that arises when all of these methods are used together has not yet been solved and remains an important issue in malware research. Therefore, in this research we focus on developing a new malware classification system that supplements existing malware classification research and addresses these problems.
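
As a rough sketch of the hybrid-feature idea described above (not the authors' system: the feature dimensions, the random placeholder data, and the RandomForest choice are all assumptions), static and dynamic feature vectors can simply be concatenated and handed to an off-the-shelf classifier:

```python
# Illustrative sketch only: combines hypothetical static and dynamic feature
# vectors into one "hybrid" vector and trains a generic classifier.
# Feature extraction and model choice are assumptions, not the paper's design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples = 200

# Placeholder features: e.g. static = opcode/byte n-gram counts,
# dynamic = API-call frequencies observed during a sandbox run.
static_features = rng.random((n_samples, 64))
dynamic_features = rng.random((n_samples, 32))
labels = rng.integers(0, 5, size=n_samples)   # 5 hypothetical malware families

# "Hybrid" features: simple concatenation of the two feature views.
hybrid = np.hstack([static_features, dynamic_features])

X_train, X_test, y_train, y_test = train_test_split(
    hybrid, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("hold-out accuracy:", clf.score(X_test, y_test))
```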

An Efficient VM-Level Scaling Scheme in an IaaS Cloud Computing System: A Queueing Theory Approach

  • Lee, Doo Ho
    • International Journal of Contents, v.13 no.2, pp.29-34, 2017
  • Cloud computing is becoming an effective and efficient way of integrating computing resources and computing services. Through centralized management of resources and services, cloud computing delivers hosted services over the internet, such that access to shared hardware, software, applications, information, and all resources is elastically provided to the consumer on demand. The main enabling technology for cloud computing is virtualization. Virtualization software creates a temporarily simulated or extended version of computing and network resources. The objectives of virtualization are as follows: first, to fully utilize the shared resources by applying partitioning and time-sharing; second, to centralize resource management; third, to enhance cloud data center agility and provide the required scalability and elasticity for on-demand capabilities; fourth, to improve testing and running software diagnostics on different operating platforms; and fifth, to improve the portability of applications and workload migration capabilities. One of the key features of cloud computing is elasticity. It enables users to create and remove virtual computing resources dynamically according to changing demand, but it is not easy to decide on the right amount of resources. Indeed, proper provisioning of resources to applications is an important issue in IaaS cloud computing. Most web applications encounter large and fluctuating task requests. In predictable situations, the resources can be provisioned in advance through capacity planning techniques. But in the case of unplanned, spiky requests, it is desirable to scale the resources automatically, called auto-scaling, which adjusts the resources allocated to an application based on its need at any given time. This frees the user from the burden of deciding how many resources are necessary each time. In this work, we propose an analytical and efficient VM-level scaling scheme by modeling each VM in a data center as an M/M/1 processor-sharing queue. Our proposed VM-level scaling scheme is validated via a numerical experiment.
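
A minimal sketch of how an M/M/1 processor-sharing model can drive a VM-count decision, assuming arrivals are split evenly across identical VMs and using the textbook mean response time 1/(μ − λ) per queue; the equal-split routing and the target-response threshold are assumptions, not the authors' exact scheme:

```python
# Minimal sketch, assuming arrivals are split evenly across identical VMs
# and each VM behaves as an M/M/1 processor-sharing queue (mean response
# time 1/(mu - lambda_per_vm)). Not the paper's exact scaling scheme.

def vms_needed(arrival_rate: float, service_rate: float,
               target_response: float, max_vms: int = 1000) -> int:
    """Smallest VM count whose per-VM M/M/1-PS mean response time
    stays at or below target_response (rates in requests/second)."""
    for n in range(1, max_vms + 1):
        lam = arrival_rate / n              # per-VM arrival rate
        if lam >= service_rate:             # queue unstable, need more VMs
            continue
        mean_response = 1.0 / (service_rate - lam)
        if mean_response <= target_response:
            return n
    raise ValueError("target not reachable with max_vms")

# Example: 450 req/s total, each VM serves 100 req/s, target 50 ms -> 6 VMs.
print(vms_needed(arrival_rate=450.0, service_rate=100.0, target_response=0.05))
```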

Data Volume based Trust Metric for Blockchain Networks (블록체인 망을 위한 데이터 볼륨 기반 신뢰 메트릭)

  • Jeon, Seung Hyun
    • Journal of Convergence for Information Technology, v.10 no.10, pp.65-70, 2020
  • Since the appearance of Bitcoin, which builds peer-to-peer networks for transacting digital content and issuing cryptocurrency, many blockchain networks have been developed to improve transaction performance. Recently, Joseph Lubin discussed Decentralization Transaction per Second (DTPS), aimed at alleviating the bias of raw TPS as a metric. However, Lubin's trust model does not sufficiently consider the security aspect of the scalability trilemma. Accordingly, we proposed a trust metric based on blockchain size, stale block rate, and average block size, using a sigmoid function and convex optimization. Via numerical analysis, we presented the optimal blockchain size of popular blockchain networks and then compared the proposed trust metric with Lubin's trust model. In addition, Bitcoin-based blockchain networks such as Litecoin were superior to Ethereum in terms of trust satisfaction and data volume.
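
The abstract names the inputs (blockchain size, stale block rate, average block size) and the use of a sigmoid, but not the formula itself, so the following is a purely hypothetical sketch of what such a metric could look like; every functional form, weight, and constant is an assumption:

```python
# Purely hypothetical sketch of a sigmoid-shaped trust metric built from the
# quantities named in the abstract (blockchain size, stale block rate,
# average block size). The functional form and all constants are assumptions;
# the paper's actual metric is not reproduced here.
import math

def trust_metric(chain_size_gb: float, stale_block_rate: float,
                 avg_block_size_mb: float,
                 size_midpoint_gb: float = 300.0, steepness: float = 0.02) -> float:
    # Larger chains are harder for full nodes to store and verify, so let
    # trust decay with chain size via a falling sigmoid in [0, 1].
    size_term = 1.0 / (1.0 + math.exp(steepness * (chain_size_gb - size_midpoint_gb)))
    # Stale (orphaned) blocks indicate propagation problems; penalize them.
    stale_term = 1.0 - min(stale_block_rate, 1.0)
    # Bigger blocks carry more data per confirmation; saturate the benefit.
    volume_term = avg_block_size_mb / (1.0 + avg_block_size_mb)
    return size_term * stale_term * volume_term

print(trust_metric(chain_size_gb=350.0, stale_block_rate=0.005, avg_block_size_mb=1.3))
```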

A Study of IP QoS(Quality of Service) Metric Sizing Based on the Connection and Transmission Quality (접속품질과 전송품질을 기반으로 한 IP QoS(Quality of Service) 측정 메트릭스 정립)

  • Noh, SiChoon; Kim, Jeom goo
    • Convergence Security Journal, v.15 no.2, pp.57-62, 2015
  • IP QoS is needed to overcome the limitations of the existing best-effort service in the face of explosive growth in Internet traffic. A metric sizing methodology is very important for meeting the IP QoS requirements of next-generation communication networks. However, IP networks have been developed with a focus on flexibility and scalability rather than QoS. Therefore, quality measures different from those of existing IP technology are necessary to apply QoS in IP networks. By establishing connection quality and transmission quality as the basis of IP QoS (Quality of Service), objective quality metrics can be obtained through analysis of the factors that impair communication quality. Understanding the communication quality level makes it possible to evaluate quality-sensitive areas and quality impairments. Establishing effective quality metrics can be expected to improve call quality and, through that improvement, customer satisfaction.
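
Since the abstract does not define the metrics themselves, the following is only a hypothetical illustration of folding connection-quality indicators (setup success, setup delay) and transmission-quality indicators (delay, jitter, loss) into one score; all indicators, thresholds, and weights are assumptions:

```python
# Hypothetical example only: one way to combine connection-quality and
# transmission-quality measurements into a single IP QoS score in [0, 1].
# The chosen indicators, thresholds, and weights are assumptions, not the
# paper's metric definitions.

def qos_score(setup_success_ratio: float, setup_delay_ms: float,
              one_way_delay_ms: float, jitter_ms: float, loss_ratio: float) -> float:
    def degrade(value: float, acceptable: float) -> float:
        # 1.0 when the impairment is zero, falling toward 0 as it grows
        # past the assumed acceptable level.
        return max(0.0, 1.0 - value / (2.0 * acceptable))

    connection_quality = 0.5 * setup_success_ratio + 0.5 * degrade(setup_delay_ms, 1000.0)
    transmission_quality = (degrade(one_way_delay_ms, 150.0)
                            + degrade(jitter_ms, 30.0)
                            + degrade(loss_ratio * 100.0, 1.0)) / 3.0
    # Equal weighting of the two quality families (assumption).
    return 0.5 * connection_quality + 0.5 * transmission_quality

print(qos_score(setup_success_ratio=0.99, setup_delay_ms=400.0,
                one_way_delay_ms=80.0, jitter_ms=12.0, loss_ratio=0.002))
```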

A Study of Quality-based Software Architecture Design Model under Web Application Development Environment (품질기반 웹 애플리케이션 개발을 위한 소프트웨어아키텍쳐 설계절차 예제 정립)

  • Moon, Song Chul; Noh, Si Choon
    • Convergence Security Journal, v.12 no.4, pp.115-122, 2012
  • As software development has matured, challenges such as development time, error-free quality, and adaptability to frequent maintenance have been raised for large and complex software. Traditionally, web application development has not taken quality aspects such as reusability, reliability, scalability, and simplicity into account. Because traditional development methodologies are limited in addressing these quality issues, a new methodology is needed. Application quality goals cannot be achieved unless application logic, data, and architecture are addressed across the entire application rather than by separate methodologies. In this study, we propose a web application architecture design procedure that deals with secure coding, a major issue, and with the factors behind web application security vulnerabilities. The proposed web application architecture design process is based on the ISO/IEC 9000 series.

A Study of an Mobile Agent System Based on Hybrid P2P (변형 P2P 기반 시스템을 활용한 이동 에이전트 시스템에 관한 연구)

  • Lee, Seok-Hee; Yang, Il-Deung; Kim, Seong-Ryeol
    • Journal of Korea Society of Industrial Information Systems, v.17 no.5, pp.19-28, 2012
  • Recently, collaboration between grid and cloud computing has become a prominent issue, and P2P systems form the basis of such collaborative network systems. Distinguished from client/server systems, P2P systems for information exchange are divided into categories by form according to their purpose and functions. In accordance with these purposes and functions, they offer information and data retrieval, remote program control, and integration services. Most P2P systems take a client/server form for scalability and manageability, but the application of mixed-mode (hybrid) systems to overcome the drawbacks of that form is increasing. Moreover, to provide distributed computing services suited to users, mobile agent technology capable of accommodating diverse needs is required. In this paper, we propose the design of a mobile agent system that provides the access control for remote resources required by mobile agents, agent execution and management capabilities, and improved reliability.

XML based Dynamic Search Agent for the Internet Efficient Multicast (인터넷 정보 검색을 위한 XML 기반 동적 탐색 에이전트 개발연구)

  • 이양원
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2003.10a, pp.598-601, 2003
  • As the amount of information on the WWW increases at a very high speed, the role of search engines has become more necessary and important. Yet current popular search engines have a scalability limitation: the more information they gather, the more garbage hits appear in their search results. XML-based metadata has therefore recently been the focus of much attention. Metadata is information about information; using a small portion of data, it can express the fundamental information about other information. XML enables data to be expressed in a better organized and well structured way. Moreover, current search engines lose efficiency and disappoint users with dead links and obsolete pages, because their information is collected and indexed in advance, before users perform a search. Therefore, a mechanism is required that can search current, up-to-date information resources dynamically in real time to avoid retrieving out-of-date information. In this study, a new search agent system for the WWW using XML-based technology is proposed. The implementation of a prototype of the proposed system, a comparison with traditional search engines, and an evaluation of the prototype system are also discussed.
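
As a small illustration of metadata as structured "information about information" (the element names are made up, not the paper's schema), an XML record describing a web resource can be built and serialized with Python's standard library:

```python
# Small illustration of XML-based metadata: a made-up record describing a
# web page with a few descriptive fields. Element names are hypothetical,
# not the schema used in the paper.
import xml.etree.ElementTree as ET

resource = ET.Element("resource", attrib={"url": "http://example.org/page"})
ET.SubElement(resource, "title").text = "Example page"
ET.SubElement(resource, "subject").text = "distributed search"
ET.SubElement(resource, "lastModified").text = "2003-10-01"
ET.SubElement(resource, "language").text = "en"

# Serialize the metadata record; a search agent could index fields like
# <subject> and <lastModified> instead of the full page text.
print(ET.tostring(resource, encoding="unicode"))
```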

Effect of Processing Parameters on the Formation of Large Area Self-Assembled Monolayer of Polystyrene Beads by a Convective Self-Assembly Method (대류성 자기조립법을 통한 폴리스티렌 비드 대면적 단일층 형성에 미치는 공정 변수 효과)

  • Seo, Ahn-na; Choi, Ji-Hwan; Pyun, Jae-chul; Kim, Won Mok; Kim, Inho; Lee, Kyeong-Seok
    • Korean Journal of Materials Research, v.25 no.12, pp.647-654, 2015
  • Self-assembled monolayers (SAMs) of microspheres such as silica and polystyrene (PS) beads have found widespread application in photonic crystals, sensors, and lithographic masks or templates. From a practical viewpoint, setting up a high-throughput process to form a SAM over large areas in a controllable manner is a key challenge. Various methods have been suggested, including drop casting, spin coating, Langmuir-Blodgett, and convective self-assembly (CSA) techniques. Among these, the CSA method has recently attracted attention due to its potential scalability to an automated high-throughput process. By controlling various parameters, this process can be precisely tuned to achieve well-ordered arrays of microspheres. In this study, using a restricted-meniscus CSA method, we systematically investigate the effect of the processing parameters on the formation of large-area self-assembled monolayers of PS beads. A way to provide hydrophilicity, a prerequisite for CSA, to the surface of a hydrophobic photoresist layer is presented in order to apply the SAM of PS beads as a mask for photonic nanojet lithography.

The Design of Hardware MPI Units for MPSoC (MPSoC를 위한 저비용 하드웨어 MPI 유닛 설계)

  • Jeong, Ha-Young; Chung, Won-Young; Lee, Yong-Surk
    • The Journal of Korean Institute of Communications and Information Sciences, v.36 no.1B, pp.86-92, 2011
  • In this paper, we propose a novel hardware MPI (Message Passing Interface) unit that supports message passing in multiprocessor systems with a distributed memory architecture. The MPI hardware unit handles data synchronization, transmission, and completion, and it supports non-blocking operation of the processors, reducing synchronization overhead. Additionally, the MPI hardware unit combines a ready entry, a request entry, and a reserve entry, which store and manage the synchronized messages, and it performs multiple outstanding issue and out-of-order completion. According to BFM (Bus Functional Model) simulation results, performance increases by 25% for many-to-many communication. We designed the MPI unit in HDL, synthesized it with the Synopsys Design Compiler using a MagnaChip 0.18 µm library, and then fabricated a prototype chip. The proposed message transmission interface hardware shows high performance relative to its increase in area. Thus, considering low-cost design and scalability, the MPI hardware unit is useful for increasing the overall performance of an embedded MPSoC (Multi-Processor System-on-Chip).
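
For readers unfamiliar with the software-level semantics that such a unit accelerates, the following mpi4py sketch shows ordinary non-blocking MPI send/receive with deferred completion; it illustrates the MPI programming model only, not the proposed hardware:

```python
# Software-level illustration of the non-blocking MPI semantics the paper's
# hardware unit accelerates (plain mpi4py; this is not the hardware design).
# Run with, e.g.: mpiexec -n 2 python nonblocking_demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    req = comm.isend({"payload": list(range(8))}, dest=1, tag=7)
    # ... the sender could do useful work here instead of blocking ...
    req.wait()                      # completion is checked later
    print("rank 0: send completed")
elif rank == 1:
    req = comm.irecv(source=0, tag=7)
    data = req.wait()               # block only when the data is needed
    print("rank 1: received", data)
```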

An Index-Based Search Method for Performance Improvement of Set-Based Similar Sequence Matching (집합 유사 시퀀스 매칭의 성능 향상을 위한 인덱스 기반 검색 방법)

  • Lee, Juwon; Lim, Hyo-Sang
    • KIPS Transactions on Software and Data Engineering, v.6 no.11, pp.507-520, 2017
  • The set-based similar sequence matching method measures similarity not for individual data items but for sets that group multiple data items. In this method, the similarity of two sets is represented as the size of the intersection between them. However, the method has a twofold performance problem: 1) calculating the intersection size is a time-consuming process, and 2) the number of set pairs for which the intersection size must be calculated is quite large. In this paper, we propose an index-based search method that improves the performance of set-based similar sequence matching in order to solve these performance issues. Our method consists of two parts. First, we convert the set similarity problem into an intersection size comparison problem and provide an index structure that accelerates the intersection size calculation. Second, we propose an efficient set-based similar sequence matching method that exploits the proposed index structure. Through experiments, we show that the proposed method reduces execution time by a factor of 30 to 50 compared with existing methods. We also show that the proposed method is scalable, since the performance gap grows as the number of data sequences increases.
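
As a generic sketch of the idea of using an index to avoid computing intersection sizes against every set (a plain inverted index with a count threshold; not necessarily the paper's index structure):

```python
# Generic sketch of index-accelerated intersection-size search: an inverted
# index from element -> set ids lets us accumulate overlaps only for
# candidate sets that share at least one element with the query, instead of
# comparing the query against every stored set.
from collections import defaultdict

def build_inverted_index(sets: dict[int, set]) -> dict:
    index = defaultdict(list)
    for set_id, elements in sets.items():
        for e in elements:
            index[e].append(set_id)
    return index

def similar_sets(query: set, index: dict, min_overlap: int) -> dict[int, int]:
    overlap = defaultdict(int)
    for e in query:                       # touch only postings of query elements
        for set_id in index.get(e, ()):
            overlap[set_id] += 1          # |query ∩ set| accumulated incrementally
    return {sid: c for sid, c in overlap.items() if c >= min_overlap}

data = {1: {"a", "b", "c"}, 2: {"b", "c", "d"}, 3: {"x", "y"}}
idx = build_inverted_index(data)
print(similar_sets({"b", "c", "z"}, idx, min_overlap=2))   # {1: 2, 2: 2}
```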