An Algorithm to Detect P2P Heavy Traffic based on Flow Transport Characteristics (플로우 전달 특성 기반의 P2P 헤비 트래픽 검출 알고리즘)

  • Choi, Byeong-Geol; Lee, Si-Young; Seo, Yeong-Il; Yu, Zhibin; Jun, Jae-Hyun; Kim, Sung-Ho
    • Journal of KIISE: Information Networking / v.37 no.5 / pp.317-326 / 2010
  • Nowadays, transmission bandwidth for network traffic is increasing and traffic types are diversifying, such as peer-to-peer (P2P) and real-time video, because the distributed computing environment is spreading and various network-based applications are being developed. However, as P2P traffic occupies a large share of Internet backbone traffic, the transmission bandwidth and quality of service (QoS) of other network applications such as web, FTP, and real-time video cannot be guaranteed. In previous research, the port-based technique, which checks well-known port numbers, and the Deep Packet Inspection (DPI) technique, which checks packet payloads, were suggested for solving the problem of P2P traffic; however, these methods are difficult to apply to P2P traffic detection because P2P applications do not use well-known port numbers and packet payloads may be encrypted. The proposed algorithm, which identifies P2P heavy traffic based on flow transport parameters and behavioral characteristics, can overcome the limitations of the port-based and DPI techniques. The focus of this paper is to identify P2P heavy-traffic flows rather than all P2P traffic. P2P traffic consists of two steps: i) searching for the opposite peer that holds some content, and ii) downloading the content from one or more peers. We define P2P flow patterns based on these features of P2P applications and then implement a system to classify P2P heavy traffic.
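
As an illustration only, not the authors' implementation: the sketch below shows how flow transport features such as transferred volume, duration, and the number of concurrent peers might be combined to flag heavy P2P flows. All field names and thresholds are hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    src_ip: str
    dst_port: int
    bytes_sent: int     # total bytes transferred in the flow
    duration_s: float   # flow duration in seconds
    peer_count: int     # distinct remote peers contacted by the same host

def is_p2p_heavy(flow: Flow,
                 min_bytes: int = 50_000_000,
                 min_peers: int = 10,
                 min_rate_bps: float = 1_000_000.0) -> bool:
    """Hypothetical rule: a flow is 'P2P heavy' if it moves a lot of data,
    its host talks to many peers (the search-phase fan-out), and the
    sustained rate suggests a download phase rather than interactive use."""
    if flow.duration_s <= 0:
        return False
    rate_bps = flow.bytes_sent * 8 / flow.duration_s
    return (flow.bytes_sent >= min_bytes
            and flow.peer_count >= min_peers
            and rate_bps >= min_rate_bps)

# Example: a long, high-volume flow from a host that contacted many peers.
print(is_p2p_heavy(Flow("10.0.0.5", 51413, 200_000_000, 600.0, 25)))  # True
```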

Revisiting Clock Synchronization Problems: Static and Dynamic Constraint Transformations for Real Time Systems (시계 동기화 문제의 재 고찰 : 실시간 시스템을 위한 정적/동적 제약 변환 기법)

  • Yu, Min-Su; Park, Jeong-Geun; Hong, Seong-Su
    • Journal of KIISE: Computer Systems and Theory / v.26 no.10 / pp.1264-1274 / 1999
  • In this paper, we present static and dynamic constraint transformation techniques for ensuring timing requirements in a distributed real-time system possessing periodically synchronized distributed local clocks. Traditional discrete clock synchronization algorithms that adjust local clocks instantaneously yield time discontinuities. Such time discontinuities lead to the loss or the gain of events, thus raising serious run-time faults. While continuous clock synchronization is generally suggested to avoid the time discontinuity problem, it incurs too much run-time overhead to be implemented in software. We propose a dynamic constraint transformation (DCT) technique which can solve the problem without modifying discrete clock synchronization algorithms. We formally prove the correctness of the DCT by showing that the DCT with discrete clock synchronization generates the same task schedule as continuous clock synchronization. We also investigate schedulability problems that arise when imperfect local clocks are used in distributed real-time systems. We first define two notions of schedulability, global schedulability and local schedulability, and then present a static constraint transformation (SCT) technique. The SCT ensures that it is sufficient to check the schedulability of a task locally in a node with a local clock, since the global schedulability of the task is derived from its local schedulability through the SCT. We formally prove the correctness of the SCT.
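
A toy sketch of the underlying idea only, not the paper's formal DCT: when a discrete synchronization step shifts the local clock by some amount, the timing constraints of pending events can be shifted by the same amount, so no event is lost or duplicated. The function and variable names are assumptions.

```python
def transform_constraints(pending_deadlines, clock_adjustment):
    """Shift every pending timing constraint by the clock adjustment so that
    a discrete jump of the local clock does not make deadlines appear to
    have passed (or to lie further away) than they really do."""
    return [d + clock_adjustment for d in pending_deadlines]

# Local clock is stepped forward by 5 ms during synchronization;
# deadlines expressed in local time move forward by the same amount.
deadlines_ms = [120, 250, 400]
print(transform_constraints(deadlines_ms, 5))  # [125, 255, 405]
```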

An Algorithm for Computing Range-Groupby Queries (영역-그룹화 질의 계산 알고리즘)

  • Lee, Yeong-Gu; Mun, Yang-Se; Hwang, Gyu-Yeong
    • Journal of KIISE: Databases / v.29 no.4 / pp.247-261 / 2002
  • Aggregation is an important operation that affects the performance of OLAP systems. In this paper we define a new class of aggregation queries, called range-groupby queries, and present a method for processing them. A range-groupby query is defined as a query that, for an arbitrarily specified region of an n-dimensional cube, computes aggregations for each combination of values of the grouping attributes. Range-groupby queries are used very frequently in analyzing information in MOLAP since they allow us to summarize various trends in an arbitrarily specified subregion of the domain space. In MOLAP applications, in order to improve the performance of query processing, a method of maintaining precomputed aggregation results, called the prefix-sum array, is widely used. For range-groupby queries, however, maintaining precomputed aggregation results for each combination of the grouping attributes incurs enormous storage overhead. Here, we propose a fast algorithm that can compute range-groupby queries with minimal storage overhead. Our algorithm maintains only one prefix-sum array and still effectively processes range-groupby queries for all possible combinations of the grouping attributes. Compared with the method that maintains a prefix-sum array for each combination of the grouping attributes in an n-dimensional cube, our algorithm reduces the space overhead by (equation omitted), while accessing a similar number of cells.
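
For background, not the paper's algorithm itself: a 2D prefix-sum array answers a range-sum query over any axis-aligned rectangle with four lookups, which is the precomputation technique the abstract builds on. The code below is a minimal sketch.

```python
def build_prefix_sum(grid):
    """P[i][j] = sum of grid[0..i-1][0..j-1] (one extra row/column of zeros)."""
    rows, cols = len(grid), len(grid[0])
    P = [[0] * (cols + 1) for _ in range(rows + 1)]
    for i in range(rows):
        for j in range(cols):
            P[i + 1][j + 1] = grid[i][j] + P[i][j + 1] + P[i + 1][j] - P[i][j]
    return P

def range_sum(P, r1, c1, r2, c2):
    """Sum of grid[r1..r2][c1..c2] (inclusive) via inclusion-exclusion."""
    return P[r2 + 1][c2 + 1] - P[r1][c2 + 1] - P[r2 + 1][c1] + P[r1][c1]

grid = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]
P = build_prefix_sum(grid)
print(range_sum(P, 0, 1, 1, 2))  # 2 + 3 + 5 + 6 = 16
```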

Extraction of Road Structure Elements for Developing IFC (Industry Foundation Classes) Model for Road (도로분야 IFC 확장을 위한 도로시설의 구성요소 도출)

  • Moon, Hyoun-Seok; Choi, Won-Sik; Kang, Leen-Seok; Nah, Hei-Sook
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.2 / pp.1195-1203 / 2014
  • Since IFC (Industry Foundation Classes) 4 is based on the representation of 3D elements for architectural projects and does not define standardized shapes for civil projects such as roads, bridges, and tunnels, it has limitations in securing interoperability for exchanging shape information models for civil projects. In addition, since road facilities follow a linear reference and are modeled along the center alignment, it is difficult for designers to create a standardized 3D road model. The aim of this study is to configure the structural elements of a road and their attributes from the perspective of 3D design, in order to develop a shape information model for roads. To this end, this study analyzes design documents consisting of a road design handbook, guides, specifications, and standards, and then extracts the shape elements and attributes of road structures. Each shape element is defined as an entity item, and we review the hierarchical structure of a road shape using a virtual road model. The detailed elements and their attributes can be utilized as a 3D shape information model for constructing a BIM (Building Information Modeling) environment for infrastructure. It is also expected that the suggested items will serve as base data for extending IFC for roads by subdividing the detailed shapes, types, and attributes for road projects.

Planning Evacuation Routes with Load Balancing in Indoor Building Environments (실내 빌딩 환경에서 부하 균등을 고려한 대피경로 산출)

  • Jang, Minsoo; Lim, Kyungshik
    • KIPS Transactions on Computer and Communication Systems / v.5 no.7 / pp.159-172 / 2016
  • This paper presents a novel algorithm for searching evacuation paths in indoor disaster environments. The proposed method significantly improves the time complexity of finding paths to the evacuation exit by introducing a light-weight Disaster Evacuation Graph (DEG) for a building, which keeps the size of the graph small. With the DEG, the method also considers load balancing and the bottleneck capacity of the paths to the evacuation exit simultaneously. The algorithm consists of two phases: horizontal tiering (HT) and vertical tiering (VT). The HT phase finds a possible optimal path from anywhere on a specific floor to the evacuation stairs of that floor. After the HT phases of all floors finish in parallel, the VT phase integrates all results from the HT phases to determine an evacuation path from anywhere on a floor to the safety zone of the building, which could be the entrance or the roof. The tiering scheme is used to define the range of the graph to process. To test the performance of the method, computing times and evacuation times are compared with those of existing path-searching algorithms. The results show that the proposed method outperforms the existing algorithms in terms of computing time and evacuation time, and that it is useful for quickly finding evacuation routes for evacuees in a large-scale building.
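
A generic illustration, not the paper's HT/VT algorithm: the sketch below finds, on a small hypothetical floor graph, the path to an exit that maximizes the bottleneck capacity along the way, which is one common way to express the bottleneck criterion mentioned above. The graph contents and capacities are invented for the example.

```python
import heapq

def widest_path(graph, source, exit_node):
    """Return the path from source to exit_node that maximizes the minimum
    edge capacity (bottleneck) along the way, using a max-heap variant of
    Dijkstra. graph: {node: [(neighbor, capacity), ...]}."""
    best = {source: float("inf")}
    prev = {}
    heap = [(-float("inf"), source)]
    while heap:
        neg_cap, node = heapq.heappop(heap)
        cap = -neg_cap
        if node == exit_node:
            break
        for nxt, edge_cap in graph.get(node, []):
            bottleneck = min(cap, edge_cap)
            if bottleneck > best.get(nxt, 0):
                best[nxt] = bottleneck
                prev[nxt] = node
                heapq.heappush(heap, (-bottleneck, nxt))
    # Reconstruct the path by walking the predecessor links backwards.
    path, node = [], exit_node
    while node != source:
        path.append(node)
        node = prev[node]
    return [source] + path[::-1], best[exit_node]

# Rooms and corridors with per-edge capacities (people per minute, hypothetical).
floor = {"roomA": [("hall", 5), ("roomB", 2)],
         "roomB": [("hall", 8)],
         "hall":  [("stairs", 10)]}
print(widest_path(floor, "roomA", "stairs"))  # (['roomA', 'hall', 'stairs'], 5)
```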

The Assessment Guideline of the Simplified Test Maturity Model (TMM) for An Assessor (심사원을 위한 경량화 테스트 성숙도 모델을 위한 평가 가이드 연구)

  • Jang, Woo Sung; Kim, Ki Du; Son, Hyun Seung; Park, Bo Kyung; Kim, R. Young Chul
    • KIPS Transactions on Software and Data Engineering / v.6 no.8 / pp.379-384 / 2017
  • In real software business environments, many small and medium-sized companies are required to validate software quality across a diverse range of software usage. Software quality covers both product quality and process quality. In our situation, we focus on the process quality of the test organization rather than of the whole organization. However, even the original Test Maturity Model (TMM) is not sufficient to apply to domestic venture and small and medium-sized companies. To solve this problem, we suggest a simplified test maturity model for such companies. We redefine this simplified model based on the original TMM and the Test Process Improvement Next (TPI Next) model. The previous models only provide definitions of maturity levels and the goals and activities of each level; they do not provide an assessment guideline or a formal assessment procedure. For this reason, it is difficult for an assessor to assess a test organization without them. This paper suggests an assessment guideline for the simplified TMM and also defines an assessment procedure together with its activities and by-products. With this assessment guideline, an assessor can formally assess the test organizations of small and medium-sized companies, and with a self-assessment guideline, organizations can properly prepare before an assessment.

Generating Training Dataset of Machine Learning Model for Context-Awareness in a Health Status Notification Service (사용자 건강 상태알림 서비스의 상황인지를 위한 기계학습 모델의 학습 데이터 생성 방법)

  • Mun, Jong Hyeok; Choi, Jong Sun; Choi, Jae Young
    • KIPS Transactions on Software and Data Engineering / v.9 no.1 / pp.25-32 / 2020
  • In context-aware systems, rule-based AI technology has been used in the abstraction process for obtaining context information. However, the rules become complicated as user requirements for the service diversify, and data usage also increases. Therefore, there are technical limitations to maintaining rule-based models and processing unstructured data. To overcome these limitations, many studies have applied machine learning techniques to context-aware systems. In order to utilize such machine-learning-based models in a context-aware system, a management process that periodically injects training data is required. A previous study on machine-learning-based context-aware systems considered a series of management processes, such as the generation and provision of training data for operating several machine learning models, but the method was limited to the system it was applied to. In this paper, we propose a training data generation method for machine learning models that extends the machine-learning-based context-aware system. The proposed method defines a training data generation model that can reflect the requirements of the machine learning models and generates training data for each model. In the experiment, the training data generation model is defined based on the training data generation schema of a cardiac-status analysis model for the elderly in a health status notification service, and training data are generated by applying the defined model in a real software environment. In addition, we compare the accuracy obtained by training the machine learning model on the generated data, in order to verify the validity of the generated training data.
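
A minimal sketch of the general idea of schema-driven training data generation, assuming hypothetical field names and a hypothetical labeling rule; it is not the schema or model defined in the paper.

```python
# Hypothetical schema: which raw context fields a model needs and how to label them.
schema = {
    "cardiac_status_model": {
        "features": ["heart_rate", "activity_level"],
        "label": lambda ctx: "abnormal" if ctx["heart_rate"] > 120 else "normal",
    }
}

def generate_training_rows(model_name, raw_context_stream):
    """Project raw context records onto the fields a given model requires
    and attach a label, producing (features, label) training pairs."""
    spec = schema[model_name]
    rows = []
    for ctx in raw_context_stream:
        features = [ctx[f] for f in spec["features"]]
        rows.append((features, spec["label"](ctx)))
    return rows

stream = [{"heart_rate": 80, "activity_level": 2},
          {"heart_rate": 130, "activity_level": 0}]
print(generate_training_rows("cardiac_status_model", stream))
# [([80, 2], 'normal'), ([130, 0], 'abnormal')]
```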

A Prefetching Scheme for Location-Aware Mobile Information Services (위치인식 이동정보서비스를 위한 프리패칭 방법론)

  • Kim, Moon-Ja; Cha, Woo-Suk; Cho, In-Jun; Cho, Gi-Hwan
    • The KIPS Transactions: Part C / v.8C no.6 / pp.831-838 / 2001
  • Mobile information services aim to provide effective information for the real-life activities of mobile users. Due to user mobility and the need for real-world relevance, supporting an information service methodology that adapts to the current situation of the terminal and/or user has become a very important technical issue. This paper deals with a prefetching scheme for location awareness, one of the various kinds of context awareness that can be considered in a mobile information service. It makes use of a velocity-based mobility model to capture the terminal's and/or user's mobility behavior. Based on the moving speed and direction, a prefetching zone is proposed to define the set of prefetched information, so as to effectively limit the amount of prefetched information while preserving location-aware adaptability. Using a simulator, the proposed scheme has been evaluated from the standpoint of effectiveness. The idea in this paper is expected to be extensible to other mobile service contexts, such as service time, I/O types of mobile terminals, and network bandwidth.
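
An illustration of the prefetching-zone idea with hypothetical geometry, not the paper's exact formulation: an item is prefetched when it falls inside a sector ahead of the user whose radius scales with speed and whose axis follows the heading. All parameter names and defaults are assumptions.

```python
import math

def in_prefetch_zone(user_pos, heading_deg, speed_mps, item_pos,
                     time_horizon_s=60, half_angle_deg=45):
    """An item is prefetched if it lies within the distance the user could
    cover in time_horizon_s and within half_angle_deg of the heading."""
    dx, dy = item_pos[0] - user_pos[0], item_pos[1] - user_pos[1]
    dist = math.hypot(dx, dy)
    if dist > speed_mps * time_horizon_s:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    diff = abs((bearing - heading_deg + 180) % 360 - 180)
    return diff <= half_angle_deg

# Walking east at 1.5 m/s: a point 60 m ahead is prefetched, one behind is not.
print(in_prefetch_zone((0, 0), 0, 1.5, (60, 5)))    # True
print(in_prefetch_zone((0, 0), 0, 1.5, (-60, 5)))   # False
```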

Analysis of Network Traffic with Urban Area Characteristics for Mobile Network Traffic Model (이동통신 네트워크 트래픽 모델을 위한 도시 지역 이동통신 트래픽 특성 분석)

  • Yoon, Young-Hyun
    • The KIPS Transactions: Part C / v.10C no.4 / pp.471-478 / 2003
  • Traditionally, analysis, simulation, and measurement have all been used to evaluate the performance of network protocols and functional entities that support mobile wireless services. Simulation methods are useful for testing complex systems that have very complicated interactions between components. To develop a mobile call simulator used to examine, validate, and predict the performance of mobile wireless call procedures, a teletraffic model that describes the mobile communication environment is required. The mobile teletraffic model consists of two sub-models: a traffic source model and a network traffic model. In this paper, we analyzed network traffic data gathered from selected Base Stations (BSs) in order to define the mobile teletraffic model. We defined four types of cell location: residential, commercial, industrial, and afforested zones. We selected Base Stations (BSs) in Seoul that represent these cell location types and gathered real data from them. We then present the call rate per hour, the call distribution pattern per day, busy hours, idle hours, and the maximum and minimum numbers of calls for each defined cell location type. These parameters are very important for testing the performance and reliability of mobile communication systems, and they are very useful for defining a mobile network traffic model or as input parameters for existing mobile simulation programs.
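
A trivial sketch of the kind of aggregation described above, hourly call counts per cell-location type; the record format is an assumption.

```python
from collections import defaultdict

def hourly_call_rates(call_records):
    """call_records: iterable of (zone_type, hour_of_day) tuples.
    Returns {zone_type: {hour: number_of_calls}}."""
    rates = defaultdict(lambda: defaultdict(int))
    for zone, hour in call_records:
        rates[zone][hour] += 1
    return {zone: dict(hours) for zone, hours in rates.items()}

records = [("residential", 9), ("commercial", 9), ("commercial", 9),
           ("residential", 20), ("residential", 20)]
print(hourly_call_rates(records))
# {'residential': {9: 1, 20: 2}, 'commercial': {9: 2}}
```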

Temporal Data Mining Framework (시간 데이타마이닝 프레임워크)

  • Lee, Jun-Uk; Lee, Yong-Jun; Ryu, Geun-Ho
    • The KIPS Transactions: Part D / v.9D no.3 / pp.365-380 / 2002
  • Temporal data mining, the incorporation of temporal semantics into existing data mining techniques, refers to a set of techniques for discovering implicit and useful temporal knowledge from large quantities of temporal data. Temporal knowledge, expressible in the form of rules, is knowledge with temporal semantics and relationships, such as cyclic patterns, calendric patterns, and trends. There are many kinds of temporal data from which useful temporal knowledge can be discovered, including patient histories, purchase histories, and web logs. Many studies on data mining have been pursued, and some of them have addressed issues of temporal data mining for discovering temporal knowledge from temporal data, such as sequential patterns, similar time sequences, and cyclic and temporal association rules. However, these works treated the data in a database at best as a data series in chronological order and did not consider the temporal semantics and temporal relationships contained in the data. In order to solve this problem, we propose a theoretical framework for temporal data mining. This paper surveys the work to date and explores the issues involved in temporal data mining. We then define a model for temporal data mining, suggest an SQL-like mining language capable of expressing temporal mining tasks, and present the architecture of a temporal mining system.
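
A toy illustration of the kind of temporal knowledge the framework targets, not the paper's model or mining language: the sketch below surfaces a simple calendric pattern (events concentrated on a particular weekday) from dated events. The support threshold and data are invented.

```python
from collections import Counter
from datetime import date

def calendric_pattern(event_dates, min_support=0.5):
    """Find weekdays on which at least min_support of all events occur,
    a toy form of the 'calendric pattern' temporal knowledge mentioned above."""
    counts = Counter(d.strftime("%A") for d in event_dates)
    total = len(event_dates)
    return {day: n / total for day, n in counts.items() if n / total >= min_support}

purchases = [date(2002, 3, 4), date(2002, 3, 11), date(2002, 3, 18),  # Mondays
             date(2002, 3, 20)]                                       # a Wednesday
print(calendric_pattern(purchases))  # {'Monday': 0.75}
```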