• Title/Summary/Keyword: High scalability

Search Results: 434

MPLS and Video Stream broadcast multicast transport optimization through convergence (MPLS와 멀티캐스트 융합을 통한 Video Stream 방송 전송 최적화)

  • Hwang, Seong-Kyu;Han, Seung-Jo
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.6 / pp.1330-1336 / 2014
  • With advances in technology, a variety of applications and services that require QoS and real-time transmission are now in use, and the spread of mobile devices together with the development of LTE technology has made high-quality multimedia services feasible. Simply expanding bandwidth at the routers cannot satisfy these requirements, because the resulting growth of routing tables introduces network scalability problems. Bursty traffic must instead be distributed according to the network environment, which requires replacing the current destination-based routing with source-based routing. In this paper we use Multi-Protocol Label Switching (MPLS), the label-switching protocol standardized by the IETF for IP switching systems, to transmit multicast video streams with optimized QoS guarantees in an MPLS network environment, where the existing best-effort service makes it difficult to guarantee QoS for multimedia transmission.
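
As a rough illustration of the label-switching idea this abstract relies on (not the authors' implementation), the sketch below forwards packets by a short label looked up in a per-hop table instead of by destination address; the table contents and node names are hypothetical.

```python
# Minimal sketch of MPLS-style label forwarding (hypothetical labels/next hops).
# Each label-switching router (LSR) maps an incoming label to an outgoing label
# and a next hop, so core nodes never consult a destination-based routing table.

LFIB = {  # label forwarding information base of one hypothetical LSR
    100: (200, "LSR-B"),    # swap label 100 -> 200, forward to LSR-B
    101: (None, "egress"),  # pop the label at the egress hop
}

def forward(packet):
    """Swap or pop the top label and return (next_hop, packet)."""
    out_label, next_hop = LFIB[packet["label"]]
    if out_label is None:
        packet.pop("label")          # label pop: deliver as a plain IP packet
    else:
        packet["label"] = out_label  # label swap
    return next_hop, packet

if __name__ == "__main__":
    pkt = {"label": 100, "payload": "video frame"}
    print(forward(pkt))  # ('LSR-B', {'label': 200, 'payload': 'video frame'})
```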

Adaptive Random Testing through Iterative Partitioning with Enlarged Input Domain (입력 도메인 확장을 이용한 반복 분할 기반의 적응적 랜덤 테스팅 기법)

  • Shin, Seung-Hun;Park, Seung-Kyu
    • The KIPS Transactions:PartD / v.15D no.4 / pp.531-540 / 2008
  • Adaptive Random Testing (ART) is a class of test case generation algorithms designed to achieve better fault-detection capability than Random Testing (RT) by spreading test cases evenly over the input domain. Two ART algorithms, Distance-based ART (D-ART) and Restricted Random Testing (RRT), are known to have a significant computational drawback: their runtime grows quadratically with the number of test cases. To reduce the computation of D-ART and RRT, an iterative partitioning strategy for the input domain was proposed, which achieves moderate computation cost while retaining relatively high fault-detection performance. These algorithms, however, still produce non-uniform patterns in the distribution of test cases, which limits their scalability. In this paper we analyze the distribution of test cases under the iterative partitioning strategy and propose a new input domain enlargement method that makes the test cases much more evenly distributed. Simulation results show that the proposed method improves the mean relative F-measure by about 3 percent for a two-dimensional input domain and by about 10 percent for a three-dimensional one.
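
As background for the baseline the paper improves on, here is a minimal sketch of the D-ART idea (pick, from a set of random candidates, the one farthest from all already-executed test cases); the candidate-set size and the unit-square domain are illustrative choices, not values from the paper.

```python
import math
import random

def d_art_next(executed, k=10, dim=2):
    """Distance-based ART: among k random candidates in the unit hypercube,
    return the one whose nearest executed test case is farthest away.
    Comparing against every executed test is what makes D-ART quadratic."""
    candidates = [[random.random() for _ in range(dim)] for _ in range(k)]
    if not executed:
        return candidates[0]
    return max(candidates, key=lambda c: min(math.dist(c, t) for t in executed))

if __name__ == "__main__":
    tests = []
    for _ in range(20):          # 20 test cases spread over the 2-D unit square
        tests.append(d_art_next(tests))
    print(tests[:3])
```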

A Study of Core-Stateless Mechanism for Fair Bandwidth Allocation (대역 공평성 보장을 위한 Core-Stateless 기법 연구)

  • Kim, Hwa-Suk;Kim, Sang-Ha;Kim, Young-Bu
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.4C / pp.343-355 / 2003
  • Fair bandwidth allocation at routers protects adaptive flows from non-adaptive ones and may simplify end-to-end congestion control. However, traditional fair bandwidth allocation mechanisms such as Weighted Fair Queueing and Flow Random Early Drop maintain state, manage buffers, and perform packet scheduling on a per-flow basis. These mechanisms are more complex and less scalable than simple FIFO queueing when used in the interior of a high-speed network. Recently, to overcome this implementation complexity and to address scalability and robustness, several fair bandwidth allocation mechanisms that keep no per-flow state in interior routers have been proposed; Core-Stateless Fair Queueing and Rainbow Fair Queueing approximate fair queueing in core-stateless networks. In this paper we propose Simple Layered Fair Queueing (SLFQ), another core-stateless mechanism that approximates fair bandwidth allocation without per-flow state. SLFQ uses a simple layered scheme for packet labeling and has a simpler packet dropping algorithm than other core-stateless fair bandwidth allocation mechanisms. We present simulations that evaluate the performance of SLFQ in comparison with other schemes, and we discuss further areas to which SLFQ is applicable.
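
SLFQ itself is not reproduced here; the sketch below only illustrates the general core-stateless pattern it builds on: edge routers label packets with a discrete layer derived from the flow's rate, and core routers drop the highest layers when congested, without keeping per-flow state. The layer mapping and thresholds are hypothetical.

```python
import math

def edge_label(flow_rate_kbps, base_kbps=64):
    """Edge router: map a measured flow rate onto a discrete layer number.
    Higher layers carry the excess share of an aggressive flow."""
    return max(0, int(math.log2(max(flow_rate_kbps, 1) / base_kbps)))

def core_drop(packet_layer, congestion_level):
    """Core router: no per-flow state; drop packets whose layer exceeds a
    threshold derived from the current congestion level (hypothetical mapping)."""
    threshold = 5 - congestion_level
    return packet_layer > threshold   # True means drop

if __name__ == "__main__":
    print(edge_label(64), edge_label(1024))    # 0 4
    print(core_drop(4, congestion_level=2))    # True: layer 4 dropped under load
    print(core_drop(0, congestion_level=2))    # False: base layer kept
```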

Scalable Dual-Field Montgomery Multiplier Using Multi-Precision Carry Save Adder (다정도 CSA를 이용한 Dual-Field상의 확장성 있는 Montgomery 곱셈기)

  • Kim, Tae-Ho;Hong, Chun-Pyo;Kim, Chang-Hoon
    • The Journal of Korean Institute of Communications and Information Sciences / v.33 no.1C / pp.131-139 / 2008
  • This paper presents a scalable dual-field Montgomery multiplier based on a new multi-precision carry save adder (MP-CSA), which operates over both types of finite fields, GF(p) and GF($2^m$). The new MP-CSA consists of two carry save adders (CSA), each composed of n = [w/b] carry propagation adders (CPA) for a modular multiplication with w-bit words, where b is the number of dual-field adders (DFA) in a CPA. The proposed Montgomery multiplier has roughly the same timing complexity as the previous design but requires less chip area. In addition, unlike the previous architecture, the proposed circuit produces the exact modular multiplication result at the end of the operation. Furthermore, the proposed Montgomery multiplier is highly scalable in terms of w and m, and can therefore be used as a multiplier over GF(p) and GF($2^m$) for cryptographic applications.
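
As software background for the hardware design (the standard algorithm, not the authors' MP-CSA datapath), a bit-serial Montgomery multiplication over GF(p) can be sketched as follows; the modulus and operands are illustrative.

```python
def montgomery_mul(a, b, p, n_bits):
    """Radix-2 Montgomery multiplication: returns a*b*2^(-n_bits) mod p,
    assuming p is odd and a, b < p < 2^n_bits."""
    t = 0
    for i in range(n_bits):
        t += ((a >> i) & 1) * b   # conditionally add b for bit i of a
        if t & 1:
            t += p                # make t even without changing it mod p
        t >>= 1                   # Montgomery reduction step (divide by 2)
    return t - p if t >= p else t

if __name__ == "__main__":
    p = 101                       # small odd prime, for illustration only
    n = p.bit_length()
    R = 1 << n                    # Montgomery radix, R > p
    a, b = 57, 88
    aR, bR = (a * R) % p, (b * R) % p          # map into the Montgomery domain
    abR = montgomery_mul(aR, bR, p, n)         # equals a*b*R mod p
    print(montgomery_mul(abR, 1, p, n) == (a * b) % p)  # True
```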

Detection of NoSQL Injection Attack in Non-Relational Database Using Convolutional Neural Network and Recurrent Neural Network (비관계형 데이터베이스 환경에서 CNN과 RNN을 활용한 NoSQL 삽입 공격 탐지 모델)

  • Seo, Jeong-eun;Moon, Jong-sub
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.3 / pp.455-464 / 2020
  • With a variety of data types and high utilization of data, non-relational databases have become a popular form of data storage because they offer better availability and scalability. The increasing use of this technology also brings the risk of NoSQL injection attacks. Existing work mostly discusses rule-based detection of NoSQL injection attacks, which makes it hard to handle NoSQL queries outside the coverage of the rules. In this paper, we propose a model for detecting NoSQL injection attacks. Our model is based on deep learning algorithms: it selects features from NoSQL queries using a CNN and classifies NoSQL queries using an RNN. We also compare the proposed model experimentally with existing models and find that it outperforms traditional models in terms of detection rate.
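
A minimal Keras sketch of the general CNN-then-RNN pipeline described above (a convolution extracts local features from the encoded query, an LSTM classifies the sequence); the character-level encoding, layer sizes, and toy data are placeholders, not the paper's configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Toy character-level encoding of NoSQL query strings (placeholder data).
queries = ['{"user": "alice"}', '{"$where": "sleep(100)"}']
labels = [0, 1]                               # 0 = benign, 1 = injection

vocab = sorted(set("".join(queries)))
char_to_id = {c: i + 1 for i, c in enumerate(vocab)}
MAX_LEN = 64

def encode(query):
    ids = [char_to_id.get(c, 0) for c in query[:MAX_LEN]]
    return ids + [0] * (MAX_LEN - len(ids))   # pad with zeros

x = tf.constant([encode(q) for q in queries])
y = tf.constant(labels)

# Conv1D extracts local n-gram features, LSTM models their order, Dense classifies.
model = tf.keras.Sequential([
    layers.Embedding(len(vocab) + 1, 16),
    layers.Conv1D(32, 3, activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=2, verbose=0)          # placeholder training run
print(model.predict(x, verbose=0))
```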

Content-based Extended CAN to Support Keyword Search (키워드 검색 지원을 위한 컨텐츠 기반의 확장 CAN)

  • Park, Jung-Soo;Lee, Hyuk-ro;U, Uk-dong;Jo, In-june
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.2 / pp.103-109 / 2005
  • P2P systems have recently attracted a great deal of attention as they have evolved from early centralized P2P toward decentralized P2P. In particular, structured P2P systems based on a DHT have drawn attention for their scalability, systematic lookup, and high search efficiency through routing. However, DHT-based structured P2P systems have a problem: files can be located only by their unique File IDs, even though a user may wish to search for files with a set of descriptive keywords or may not know the exact File ID. This paper proposes an extended-CAN mechanism that creates content-based File IDs and uses KID and CKD for common keyword processing to support keyword search in a DHT-based P2P system.
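
The KID/CKD scheme is specific to the paper; the sketch below only illustrates the underlying idea of deriving DHT keys from descriptive keywords, so that a keyword lookup lands on the node responsible for that key. The hash-to-coordinate mapping and the flat index dictionary are hypothetical stand-ins for the distributed CAN zones.

```python
import hashlib

def keyword_to_point(keyword, grid=256):
    """Map a keyword onto a 2-D CAN coordinate by hashing it
    (hypothetical mapping, not the paper's KID/CKD scheme)."""
    digest = hashlib.sha1(keyword.lower().encode()).digest()
    return (digest[0] % grid, digest[1] % grid)

def publish(index, file_id, keywords):
    """Register file_id under every descriptive keyword."""
    for kw in keywords:
        zone = index.setdefault(keyword_to_point(kw), {})
        zone.setdefault(kw, set()).add(file_id)

def search(index, keyword):
    return index.get(keyword_to_point(keyword), {}).get(keyword, set())

if __name__ == "__main__":
    index = {}                                   # stands in for the CAN zones
    publish(index, "file-42", ["holiday", "beach", "2005"])
    print(search(index, "beach"))                # {'file-42'}
```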

An adaptive load balancing method for RFID middlewares based on the Standard Architecture (RFID 미들웨어 표준 아키텍처에 기반한 적응적 부하 분산 방법)

  • Park, Jae-Geol;Chae, Heung-Seok
    • The KIPS Transactions:PartD / v.15D no.1 / pp.73-86 / 2008
  • Because of its capability for automatic identification of objects, RFID (Radio Frequency Identification) technology has extended its application areas to logistics, healthcare, and food management systems. Load balancing is a basic technique for improving the scalability of a system by moving load from overloaded middlewares to underloaded ones. Adaptive load balancing is known to be effective for distributed systems with a large load variance under unpredictable conditions. Load balancing is needed for RFID middlewares because they must efficiently handle the vast numbers of RFID tags collected from multiple RFID readers. Since the load on RFID middlewares can vary widely and is difficult to predict, it is desirable to take an adaptive load balancing approach that can dynamically choose a proper load balancing strategy depending on the current load. This paper proposes an adaptive load balancing approach for RFID middlewares and presents its design and implementation. First, we determine a performance model through experiments with a real RFID middleware. Then, a set of proper load balancing strategies for high, medium, and low system loads is determined from a simulation of various load balancing strategies based on the performance model.
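
A minimal sketch of the adaptive idea described above: the middleware picks a load balancing strategy according to the currently observed load level. The thresholds and strategies are hypothetical placeholders, not the performance model measured in the paper.

```python
import random

def random_choice(middlewares, load):
    return random.choice(middlewares)               # cheap, ignores load

def round_robin(middlewares, load, _state={"i": -1}):
    _state["i"] = (_state["i"] + 1) % len(middlewares)  # persistent counter
    return middlewares[_state["i"]]                 # spreads requests evenly

def least_loaded(middlewares, load):
    return min(middlewares, key=lambda m: load[m])  # inspects current load

def pick_strategy(avg_load):
    """Hypothetical mapping from current system load to a balancing strategy."""
    if avg_load < 0.3:
        return random_choice     # low load: a cheap decision is enough
    if avg_load < 0.7:
        return round_robin       # medium load: spread evenly
    return least_loaded          # high load: pay the cost of load inspection

if __name__ == "__main__":
    load = {"mw1": 0.9, "mw2": 0.4, "mw3": 0.2}
    strategy = pick_strategy(sum(load.values()) / len(load))
    print(strategy.__name__, strategy(list(load), load))  # route next reader's tags
```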

An Efficient Implementation of Mobile Raspberry Pi Hadoop Clusters for Robust and Augmented Computing Performance

  • Srinivasan, Kathiravan;Chang, Chuan-Yu;Huang, Chao-Hsi;Chang, Min-Hao;Sharma, Anant;Ankur, Avinash
    • Journal of Information Processing Systems / v.14 no.4 / pp.989-1009 / 2018
  • Rapid advances in science and technology, with exponential development of smart mobile devices, workstations, supercomputers, smart gadgets, and network servers, have been witnessed over the past few years. The sudden increase in the Internet population and manifold growth in Internet speeds have occasioned the generation of an enormous amount of data, now termed 'big data'. Given this scenario, storage of data on local servers or a personal computer is an issue that can be resolved by utilizing cloud computing, and several cloud computing service providers are available to address big data issues. This paper establishes a framework that builds Hadoop clusters on the new single-board computer (SBC) Mobile Raspberry Pi; these clusters offer facilities for storage as well as computing. Regular data centers require large amounts of energy for operation, need cooling equipment, and occupy prime real estate. This energy consumption and these physical space constraints can be addressed by employing Mobile Raspberry Pi Hadoop clusters, which provide a cost-effective, low-power, high-speed solution along with micro-data center support for big data. Hadoop provides the required modules for the distributed processing of big data by deploying map-reduce programming approaches. In this work, the performance of the SBC clusters was compared with that of a single computer. The experimental data show that the SBC clusters outperform a single computer by around 20%. Furthermore, the cluster processing speed for large volumes of data can be enhanced by increasing the number of SBC nodes. Data storage is accomplished using the Hadoop Distributed File System (HDFS), which offers more flexibility and greater scalability than a single computer system.
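
To make the map-reduce workflow mentioned above concrete, here is the textbook word-count pair of map and reduce steps with a tiny local driver standing in for Hadoop's shuffle; it is illustrative only, not the authors' benchmark job.

```python
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Map phase: emit a (word, 1) pair for every word."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    """Reduce phase: sum the counts per word after the shuffle/sort."""
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    sample = ["big data on a Raspberry Pi cluster", "Hadoop handles big data"]
    print(dict(reducer(mapper(sample))))
    # On a real cluster the same mapper/reducer logic would run as two
    # Hadoop Streaming scripts reading stdin and writing stdout.
```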

Transformation from Legacy Application Class to JavaBeans for Component Based Development (컴포넌트 기반 개발을 위한 기존 애플리케이션 클래스의 JavaBean으로의 변환)

  • Kim, Byeong-Jun;Kim, Ji-Yeong;Kim, Haeng-Gon
    • The KIPS Transactions:PartD / v.9D no.4 / pp.619-628 / 2002
  • Reusable software components are an ultimate goal of software development. Component-based development focuses on advanced concepts rather than passive manipulation of class libraries with source code. However, building components from scratch in component-based development leads to additional development cost and effort to reconstruct new software components within a component model. Java applications provide several features based on a component model, but they offer only the smallest reuse units or a restricted set of GUI components; an existing class cannot be contributed as a component and is used only in its specific domain, at high cost and effort. In this paper, we apply the Java component model to existing Java applications and extract JavaBeans by extending component scalability. We also discuss an algorithm for the transformation mechanism from legacy classes to JavaBeans together with part of the business logic.
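
The paper's transformation targets Java classes; purely as a language-neutral illustration (Python here, to match the other sketches in this list), the bean convention it produces amounts to a no-argument constructor plus paired get/set accessors wrapped around the legacy state. The class and field names below are hypothetical.

```python
class LegacyOrder:
    """Stand-in for an existing application class with plain public state."""
    def __init__(self, item="", quantity=0):
        self.item = item
        self.quantity = quantity

class OrderBean:
    """Bean-style wrapper: no-argument constructor and get/set pairs,
    mirroring the JavaBeans convention the transformation targets."""
    def __init__(self):
        self._legacy = LegacyOrder()

    def get_item(self):
        return self._legacy.item

    def set_item(self, value):
        self._legacy.item = value

    def get_quantity(self):
        return self._legacy.quantity

    def set_quantity(self, value):
        self._legacy.quantity = value

if __name__ == "__main__":
    bean = OrderBean()
    bean.set_item("sensor")
    bean.set_quantity(3)
    print(bean.get_item(), bean.get_quantity())
```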

An Extended SNMP Scheme for a Digital Convergence Device with Control Functions (제어 기능을 갖는 디지털 컨버전스 장비를 위한 SNMP 확장에 관한 연구)

  • Heo, Gil;Kim, Eun-Hoe;Choi, Jae-Young
    • The KIPS Transactions:PartA / v.16A no.5 / pp.369-380 / 2009
  • SNMP (Simple Network Management Protocol) is a standard protocol for the management of network devices, and it provides excellent features such as scalability, information management, authentication, encryption, and access control. However, SNMP has a structural weakness in fully supporting control functions for the integrated management of digital convergence devices, and it imposes a limit on message length in SNMP communication. In this paper, we present an extended SNMP scheme for the integrated management of digital convergence devices with control functions. We modify the SNMP architecture by adding a DDS (Device Driver Subsystem) to the SNMP engine for controlling different devices and by defining a CADM (Control Agent Driver Model), thereby resolving the ambiguity between 'set' and 'control' in SNMP. The extended SNMP also makes it easy for SNMP applications to use various control functions, and it can transport massive amounts of high-level information through three new SNMP commands that remove the message length limit.
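
The DDS/CADM design is described only at the architecture level in the abstract, so the sketch below is a conceptual illustration of the ambiguity it removes: an explicit 'control' request is routed to a registered device driver instead of being overloaded onto an SNMP 'set' of a MIB variable. All class names, OIDs, and commands are hypothetical.

```python
class DeviceDriver:
    """Hypothetical driver interface registered with the agent's driver subsystem."""
    def control(self, command, args=None):
        raise NotImplementedError

class ProjectorDriver(DeviceDriver):
    def control(self, command, args=None):
        return f"projector: executed {command}({args})"

class Agent:
    """Toy agent: 'set' writes a MIB variable, 'control' invokes a driver."""
    def __init__(self):
        self.mib = {}
        self.drivers = {}

    def register(self, oid_prefix, driver):
        self.drivers[oid_prefix] = driver

    def set(self, oid, value):                   # plain SNMP SET semantics
        self.mib[oid] = value
        return "noError"

    def control(self, oid, command, args=None):  # extended 'control' semantics
        prefix = oid.rsplit(".", 1)[0]
        return self.drivers[prefix].control(command, args)

if __name__ == "__main__":
    agent = Agent()
    agent.register("1.3.6.1.4.1.9999.1", ProjectorDriver())
    print(agent.set("1.3.6.1.4.1.9999.1.2", 42))             # ordinary managed object
    print(agent.control("1.3.6.1.4.1.9999.1.2", "powerOn"))  # device control path
```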