• Title/Summary/Keyword: Internet Classification

Study on the Functional Classification of IM Application Traffic using Automata (오토마타를 이용한 메신저 트래픽의 기능별 분류에 관한 연구)

  • Lee, Sang-Woo;Park, Jun-Sang;Yoon, Sung-Ho;Kim, Myung-Sup
    • The Journal of Korean Institute of Communications and Information Sciences / v.36 no.8B / pp.921-928 / 2011
  • The increase in Internet users and services has caused an upsurge of data traffic over the network. Nowadays, a wide variety of Internet applications has emerged, generating complicated and diverse data traffic. For the efficient management of Internet traffic, many traffic classification methods have been proposed, but most of them focus on application-level classification rather than function-level classification or the state changes of applications. Functional classification of application traffic enables a detailed understanding of application behavior as well as fine-grained control of application traffic. In this paper, we propose an automata-based functional classification method for IM application traffic. We verified the feasibility of the proposed method with a function-level control experiment on IM application traffic.
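
The paper does not reproduce its automata here, so the following is only a minimal Python sketch of the general idea: each packet of a flow is abstracted into a symbol (direction plus a size bucket), and a finite automaton over those symbols maps the flow to an IM function. The states, symbols, transitions, and function labels below are hypothetical.

```python
# Minimal sketch of function-level traffic classification with a finite automaton.
# The states, symbols, and transitions are illustrative, not the paper's automata.

# A symbol abstracts one packet as (direction, size bucket).
def to_symbol(direction, size):
    bucket = "S" if size < 200 else "L"      # small vs. large payload
    return f"{direction}{bucket}"            # e.g. "outS", "inL"

# Transition table: (state, symbol) -> next state.
TRANSITIONS = {
    ("start", "outS"): "greeted",
    ("greeted", "inS"): "login",             # short request/response pair
    ("greeted", "inL"): "file_offer",
    ("file_offer", "outL"): "file_transfer",
}

# Accepting states mapped to IM functions.
FUNCTIONS = {"login": "LOGIN", "file_transfer": "FILE_TRANSFER"}

def classify_flow(packets):
    """packets: iterable of (direction, size) tuples for one flow."""
    state = "start"
    for direction, size in packets:
        state = TRANSITIONS.get((state, to_symbol(direction, size)), state)
        if state in FUNCTIONS:
            return FUNCTIONS[state]
    return "UNKNOWN"

print(classify_flow([("out", 120), ("in", 80)]))                   # LOGIN
print(classify_flow([("out", 150), ("in", 900), ("out", 1400)]))   # FILE_TRANSFER
```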

Prefix Cuttings for Packet Classification with Fast Updates

  • Han, Weitao;Yi, Peng;Tian, Le
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.4 / pp.1442-1462 / 2014
  • Packet classification is a key Internet technology that allows routers to classify arriving packets into different flows according to predefined rulesets. Previous packet classification algorithms have mainly focused on search speed and memory usage while overlooking update performance. In this paper, we propose PreCuts, which drastically improves update speed. Based on the characteristics of the IP fields, we implement three heuristics to build a 3-layer decision tree. In the first layer, we group rules that share the same highest byte of the source and destination IP addresses. In the second layer, we cluster rules that share the same IP prefix lengths. Finally, we use an information-entropy-based bit-partition heuristic to choose specific bits of the IP prefix and split the ruleset into subsets. The heuristics of PreCuts do not introduce rule duplication, and incremental updates do not degrade time or space performance. Using ClassBench, we show that, compared with BRPS and EffiCuts, the proposed algorithm not only improves time and space performance but also greatly increases update speed.
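
As a rough illustration of the three layered heuristics (not the authors' code), the sketch below assumes rules given as dicts with IPv4 prefix strings: first-layer grouping by the highest address bytes, second-layer clustering by prefix-length pairs, and an entropy-based choice of a split bit for the third layer.

```python
# Sketch of the three layered heuristics behind PreCuts (not the authors' code).
# A rule is assumed to be a dict such as {"src": "192.168.0.0/16", "dst": "10.0.0.0/8"}.
from collections import defaultdict
from math import log2

def prefix_to_int(prefix):
    """Return the 32-bit integer value of an IPv4 prefix string."""
    addr, _ = prefix.split("/")
    a, b, c, d = (int(x) for x in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def layer1(rules):
    """Group rules by the highest byte of the source and destination addresses."""
    groups = defaultdict(list)
    for r in rules:
        groups[(prefix_to_int(r["src"]) >> 24, prefix_to_int(r["dst"]) >> 24)].append(r)
    return groups

def layer2(rules):
    """Cluster rules sharing the same source/destination prefix-length pair."""
    clusters = defaultdict(list)
    for r in rules:
        clusters[(int(r["src"].split("/")[1]), int(r["dst"].split("/")[1]))].append(r)
    return clusters

def best_split_bit(rules, candidate_bits):
    """Layer 3: pick the source-prefix bit with maximum information entropy,
    i.e. the bit that splits the subset most evenly (no rule duplication)."""
    def entropy(bit):
        ones = sum((prefix_to_int(r["src"]) >> (31 - bit)) & 1 for r in rules)
        p = ones / len(rules)
        return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))
    return max(candidate_bits, key=entropy)

rules = [{"src": "10.0.0.0/8", "dst": "192.168.0.0/16"},
         {"src": "10.1.0.0/16", "dst": "192.168.1.0/24"},
         {"src": "172.16.0.0/12", "dst": "192.168.0.0/16"}]
print(len(layer1(rules)), len(layer2(rules)), best_split_bit(rules, range(8)))
```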

A study on the use of DDC scheme in directory search engine for research information resources on internet (인터넷 학술정보자원의 디렉토리 서비스 설계에 있어서 DDC 분류체계의 활용에 관한 연구)

  • 최재황
    • Journal of the Korean Society for Information Management / v.15 no.2 / pp.47-68 / 1998
  • Although the research information resources on the Internet are spread across thousands of computers, it is not always easy to obtain them at the right time and in the right manner. The purpose of this study is to use the DDC (Dewey Decimal Classification) scheme in a subject-based directory search engine for research information resources to aid retrieval on the Internet. For the design of the classification code, this study followed the 'systematic order' of DDC, arranging subjects from the general to the specific in a logical order; for the design of the classification dictionary, the 'Relative Index' of DDC was used to bring together the various aspects of subjects.
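
As a small illustration of the two DDC-based structures mentioned above, the sketch below pairs a systematic (general-to-specific) class hierarchy with a Relative-Index-style dictionary from subject terms to class codes; the entries shown are examples only, not the study's actual directory design.

```python
# Illustrative sketch: a systematic DDC class hierarchy plus a Relative-Index-style
# dictionary from subject terms to class codes. Entries are examples only.

# Systematic order: codes get longer as subjects get more specific.
DDC_TREE = {
    "000": "Computer science, information & general works",
    "020": "Library & information sciences",
    "025": "Operations of libraries & archives",
    "025.04": "Information storage & retrieval systems",
}

# Relative-Index-style dictionary: a term may point to several aspects/codes.
RELATIVE_INDEX = {
    "information retrieval": ["025.04"],
    "search engines": ["025.04"],
    "libraries": ["020", "025"],
}

def browse(code):
    """Return the code and the broader classes above it (specific -> general)."""
    return [(c, DDC_TREE[c]) for c in (code[:i] for i in range(len(code), 2, -1))
            if c in DDC_TREE]

print(RELATIVE_INDEX["search engines"])   # ['025.04']
print(browse("025.04"))
```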

Automatic Payload Signature Update System for the Classification of Dynamically Changing Internet Applications

  • Shim, Kyu-Seok;Goo, Young-Hoon;Lee, Dongcheul;Kim, Myung-Sup
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.3 / pp.1284-1297 / 2019
  • The network environment is expanding rapidly, and the study of traffic classification for network management is accordingly becoming more difficult. Automatic signature extraction is a hot topic in traffic classification research. However, existing automatic payload signature generation systems suffer from problems such as semi-automatic operation, generation of disposable signatures, generation of false-positive signatures, and signatures that are not kept up to date. Therefore, we present a fully automatic signature update system that performs all of the required processes: traffic collection, signature generation, signature management, and signature verification. The traffic collection step automatically gathers ground-truth traffic through a traffic measurement agent (TMA) and a traffic management server (TMS). The signature management step removes unnecessary signatures, the signature generation step generates new signatures, and the signature verification step removes false-positive signatures. The proposed system solves the problems of existing systems. Applying the system to a campus network showed that, for four applications, high recall values and low false-positive rates can be maintained.
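
The sketch below strings the four stages together in a minimal form. The signature format (a common payload prefix per application) and the matching logic are simplifying assumptions, not the paper's TMA/TMS implementation.

```python
# Minimal sketch of the four-stage update loop described above. The signature
# format and matching logic are assumptions, not the paper's actual system.
from os.path import commonprefix

def collect_ground_truth():
    """Stand-in for TMA/TMS collection: returns {app_name: [payloads, ...]}."""
    return {
        "app_A": [b"HELLO-A/1.0 login", b"HELLO-A/1.0 chat"],
        "app_B": [b"BYE-B start", b"BYE-B data"],
    }

def generate_signatures(traffic):
    """Derive one content signature per application (longest common prefix here)."""
    return {app: commonprefix(payloads) for app, payloads in traffic.items()}

def manage_signatures(old, new):
    """Drop signatures that no longer appear in fresh traffic, then merge in new ones."""
    merged = {app: sig for app, sig in old.items() if app in new}
    merged.update(new)
    return merged

def verify_signatures(signatures, traffic):
    """Remove false-positive signatures: a signature must not match other apps' traffic."""
    verified = {}
    for app, sig in signatures.items():
        hits_other = any(sig in p for other, ps in traffic.items() if other != app for p in ps)
        if sig and not hits_other:
            verified[app] = sig
    return verified

signatures = {}
traffic = collect_ground_truth()                       # 1. traffic collection
new_sigs = generate_signatures(traffic)                # 2. signature generation
signatures = manage_signatures(signatures, new_sigs)   # 3. signature management
signatures = verify_signatures(signatures, traffic)    # 4. signature verification
print(signatures)
```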

MalDC: Malicious Software Detection and Classification using Machine Learning

  • Moon, Jaewoong;Kim, Subin;Park, Jangyong;Lee, Jieun;Kim, Kyungshin;Song, Jaeseung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.5 / pp.1466-1488 / 2022
  • Recently, the importance and necessity of artificial intelligence (AI), especially machine learning, has been emphasized. Studies are actively underway to solve complex and challenging problems through AI systems such as intelligent CCTV, intelligent AI security systems, and AI surgical robots. Information security, which involves analyzing and responding to security vulnerabilities in software, is no exception and is recognized as a field in which significant results are expected when AI is applied. This is because the frequency of malware incidents is gradually increasing, while the available security measures, such as software security experts or source code analysis tools, remain limited. We conducted a study on MalDC, a technique that converts malware into images and classifies them using machine learning. MalDC showed good performance and was able to analyze and classify different types of malware. MalDC applies a preprocessing step to minimize the noise generated in the image conversion process and employs an image augmentation technique to reinforce the insufficient dataset, thus improving the accuracy of malware classification. To verify the feasibility of our method, we tested the malware classification technique used by MalDC on a dataset provided by Microsoft and on malware data collected by the Korea Internet & Security Agency (KISA). Consequently, an accuracy of 97% was achieved.
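
As a rough sketch of the bytes-to-image idea, the code below maps each byte of a sample to a grayscale pixel, pads or crops to a fixed shape as a simple preprocessing step, and produces mirrored copies as augmentation. The width, target size, and flip-based augmentation are assumptions for illustration, not the paper's exact pipeline.

```python
# Sketch of the malware-to-image idea: raw bytes become a grayscale image matrix,
# with padding/cropping as preprocessing and mirrored copies as augmentation.
# Width, target size, and the flips are illustrative assumptions.
import numpy as np

def bytes_to_image(raw: bytes, width: int = 64) -> np.ndarray:
    """Map each byte to one grayscale pixel (uint8), padding the last row."""
    data = np.frombuffer(raw, dtype=np.uint8)
    pad = (-len(data)) % width
    return np.concatenate([data, np.zeros(pad, dtype=np.uint8)]).reshape(-1, width)

def preprocess(img: np.ndarray, height: int = 64) -> np.ndarray:
    """Crop or zero-pad to a fixed height so every sample has the same shape."""
    if img.shape[0] >= height:
        return img[:height]
    return np.vstack([img, np.zeros((height - img.shape[0], img.shape[1]), dtype=np.uint8)])

def augment(img: np.ndarray) -> list:
    """Cheap augmentation for a small dataset: horizontally/vertically mirrored copies."""
    return [img, np.fliplr(img), np.flipud(img)]

raw = bytes(range(256)) * 40                   # stand-in for a malware binary's bytes
images = augment(preprocess(bytes_to_image(raw)))
print(images[0].shape, len(images))            # (64, 64) 3
```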

E2GSM: Energy Effective Gear-Shifting Mechanism in Cloud Storage System

  • You, Xindong;Han, GuangJie;Zhu, Chuan;Dong, Chi;Shen, Jian
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.10 / pp.4681-4702 / 2016
  • Recently, massive energy consumption in cloud storage systems has attracted great attention in both industry and the research community. However, most solutions use a single method to reduce energy consumption in only one aspect. This paper proposes an energy-effective gear-shifting mechanism (E2GSM) for cloud storage systems that saves energy from multiple aspects. E2GSM is built on a data classification mechanism and data replication management strategies. Data is classified according to its properties and then placed into the corresponding zones through the data classification mechanism. The data replication management strategies determine the minimum replica number through a mathematical model and make decisions on replica placement. Based on this classification mechanism and these replica management strategies, E2GSM can automatically shift gears among the nodes. A mathematical analytical model certifies that the proposed E2GSM is energy effective, and simulation experiments based on GridSim show that the gear-shifting mechanism is cost effective. Compared with other energy-saving mechanisms, E2GSM saves substantial energy at the slight expense of performance loss while meeting user QoS.
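
The sketch below illustrates the three ingredients in toy form: property-based classification of data into zones, a minimum-replica count derived from an availability target, and a gear level per zone. The availability model, thresholds, and gear names are assumptions, not the paper's mathematical model.

```python
# Illustrative sketch of E2GSM's building blocks: data classification into zones,
# a minimum-replica model, and gear-shifting of node power states. The availability
# model and gear levels are assumptions, not the paper's exact formulation.
import math

def classify_data(access_freq_per_day: float) -> str:
    """Place data into a zone according to its access frequency (a property)."""
    if access_freq_per_day > 100:
        return "hot"
    return "warm" if access_freq_per_day > 1 else "cold"

def min_replicas(target_availability: float, node_availability: float = 0.95) -> int:
    """Smallest replica count r with 1 - (1 - p)^r >= target availability."""
    return max(1, math.ceil(math.log(1 - target_availability) / math.log(1 - node_availability)))

def gear_for_zone(zone: str) -> str:
    """Shift nodes serving colder zones into lower-power gears."""
    return {"hot": "full-speed", "warm": "low-speed", "cold": "standby"}[zone]

zone = classify_data(access_freq_per_day=0.2)          # -> "cold"
print(zone, min_replicas(0.999), gear_for_zone(zone))  # cold 3 standby
```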

A Case Study on Network Status Classification based on Latency Stability

  • Kim, JunSeong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.11 / pp.4016-4027 / 2014
  • Understanding network latency is important for providing consistent and acceptable levels of service in network-based applications. However, due to the difficulty of estimating applications' network demands and of modeling network latency, the management of network resources has often been ignored. We expect that, since network latency repeats cycles of congested states, a systematic classification method for network status would help simplify issues in network resource management. This paper presents a simple empirical method to classify network status on a real operational network. By observing the oscillating behavior of end-to-end latency, we determine network status at run time. Five typical network statuses are defined based on long-term stability and short-term burstiness. By investigating the prediction accuracies of several simple numerical models, we show the effectiveness of the network status classification. Experimental results show around an 80% reduction in prediction errors, depending on network status.
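
A minimal sketch of the idea, assuming hypothetical thresholds and status labels (the paper defines its own five statuses from operational measurements): a long-term stability measure and a short-term burstiness measure computed from RTT samples are mapped to one of five statuses.

```python
# Minimal sketch: classify network status from end-to-end latency samples using a
# long-term stability measure and a short-term burstiness measure. The thresholds
# and the five labels are hypothetical.
import statistics

def classify_status(latencies_ms, window=10):
    """latencies_ms: recent RTT samples, oldest first."""
    mean = statistics.fmean(latencies_ms)
    long_term = statistics.pstdev(latencies_ms) / mean       # long-term stability
    recent = latencies_ms[-window:]
    short_term = (max(recent) - min(recent)) / mean           # short-term burstiness

    if long_term < 0.05 and short_term < 0.05:
        return "stable"
    if long_term < 0.05:
        return "stable-but-bursty"
    if short_term < 0.05:
        return "drifting"
    if long_term < 0.20:
        return "oscillating"
    return "congested"

print(classify_status([20.1, 20.3, 20.0, 20.2, 20.1, 20.4]))  # -> stable
```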

Domain Adaptation Image Classification Based on Multi-sparse Representation

  • Zhang, Xu;Wang, Xiaofeng;Du, Yue;Qin, Xiaoyan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.5 / pp.2590-2606 / 2017
  • Classical image classification algorithms generally assume that training and testing data are derived from the same domain with the same distribution. Unfortunately, in practical applications this assumption is rarely met. To address this problem, a domain adaptation image classification approach based on multi-sparse representation is proposed in this paper. Intermediate domains are hypothesized between the source and target domains, and each intermediate subspace is modeled through online dictionary learning with target data updating. On the one hand, the reconstruction error of the target data is kept small; on the other, the transition from the source domain to the target domain is made as smooth as possible. An augmented feature representation, produced by invariant sparse codes across the source, intermediate, and target domain dictionaries, is employed for cross-domain recognition. Experimental results verify the effectiveness of the proposed algorithm.
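
Using scikit-learn, the sketch below illustrates the augmented-feature idea: a chain of dictionaries learned from the source toward the target domain, with a sample's sparse codes over all dictionaries concatenated as its cross-domain feature. Mixing progressively more target data into each dictionary is a simplifying stand-in for the paper's online dictionary-update rule.

```python
# Minimal sketch of the augmented sparse-code feature across a source-to-target
# dictionary chain. The mixing-based interpolation is a simplifying assumption.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

def intermediate_dictionaries(X_src, X_tgt, n_steps=3, n_atoms=32, seed=0):
    """Learn one dictionary per step, mixing in more target data at each step."""
    rng = np.random.default_rng(seed)
    dicts = []
    for t in np.linspace(0.0, 1.0, n_steps):
        n_tgt = int(t * len(X_tgt))
        mix = np.vstack([X_src, X_tgt[rng.permutation(len(X_tgt))[:n_tgt]]]) if n_tgt else X_src
        learner = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0, random_state=0)
        dicts.append(learner.fit(mix).components_)
    return dicts

def augmented_feature(x, dicts):
    """Concatenate the sample's sparse codes over every dictionary in the chain."""
    codes = [sparse_encode(x.reshape(1, -1), D, algorithm="lasso_lars", alpha=0.1) for D in dicts]
    return np.hstack(codes).ravel()

# Toy usage with random data standing in for source/target image features.
X_src = np.random.default_rng(1).normal(size=(200, 64))
X_tgt = np.random.default_rng(2).normal(size=(150, 64)) + 0.5
feature = augmented_feature(X_tgt[0], intermediate_dictionaries(X_src, X_tgt))
print(feature.shape)    # (3 * 32,) sparse codes across the dictionary chain
```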

Spatial Analysis of the Internet Industry in Korea (인터넷 산업의 공간 분석에 관한 연구)

  • Lee, Hee Yeon
    • Journal of the Korean Geographical Society / v.38 no.6 / pp.863-886 / 2003
  • The Internet is the most important element in the emergence of the Internet economy, which drives the creation of new firms and employment and brings about new ways of marketing and doing business. The emergence of the Internet economy and the rapid growth of the Internet industry have contributed a great deal to changes in the spatial economy. Korea has experienced rapid growth of its Internet industries, but few geographical studies have been done to explain the impact of this development on the spatial economy. This research explores how Korea has developed a strong Internet economy in terms of driving forces on the demand and supply sides. It also builds a data set for the Internet industries by introducing a new classification scheme and measurement. The most important finding is the spatial concentration of the Internet industries toward Seoul at the national level and toward the Gangnam area within Seoul. The rise of the Internet industries has added to the attractiveness of Seoul, which enjoys cumulative and circular advantages.

Object Classification based on Weakly Supervised E2LSH and Saliency map Weighting

  • Zhao, Yongwei;Li, Bicheng;Liu, Xin;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.1 / pp.364-380 / 2016
  • The most popular approach to object classification is based on the bag-of-visual-words model, which has several fundamental problems that restrict its performance, such as low time efficiency, the synonymy and polysemy of visual words, and the lack of spatial information between visual words. In view of this, an object classification method based on weakly supervised E2LSH and saliency map weighting is proposed. First, E2LSH (Exact Euclidean Locality Sensitive Hashing) is employed to generate a group of weakly randomized visual dictionaries by clustering SIFT features of the training dataset, and the selection of hash functions is supervised, inspired by random forest ideas, to reduce the randomness of E2LSH. Second, the graph-based visual saliency (GBVS) algorithm is applied to detect the saliency map of each image and to weight the visual words according to the saliency prior. Finally, a saliency-map-weighted visual language model is used to accomplish object classification. Experimental results on the Pascal 2007 and Caltech-256 datasets indicate that the distinguishability of objects is effectively improved and that our method is superior to state-of-the-art object classification methods.
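
The sketch below illustrates the two core steps with stand-in data: E2LSH-style (p-stable) hashing of local descriptors into visual-word buckets, and weighting each word by the saliency value at its keypoint. The hash parameters and the random saliency map are assumptions; the paper additionally supervises hash-function selection and uses GBVS saliency.

```python
# Sketch of E2LSH-style visual-word hashing with saliency-weighted word counts.
# Hash parameters and the placeholder saliency map are illustrative assumptions.
import numpy as np

class E2LSHVocabulary:
    """p-stable LSH: h(x) = floor((a . x + b) / w), several functions concatenated."""
    def __init__(self, dim, n_funcs=4, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(size=(n_funcs, dim))      # Gaussian projections (2-stable)
        self.b = rng.uniform(0.0, w, size=n_funcs)
        self.w = w

    def word(self, descriptor):
        keys = np.floor((self.a @ descriptor + self.b) / self.w).astype(int)
        return hash(tuple(keys))                      # bucket id acts as the visual word

def weighted_histogram(descriptors, keypoints, saliency, vocab):
    """Accumulate each descriptor's visual word, weighted by saliency at its keypoint."""
    hist = {}
    for desc, (x, y) in zip(descriptors, keypoints):
        wid = vocab.word(desc)
        hist[wid] = hist.get(wid, 0.0) + float(saliency[y, x])
    return hist

# Toy usage with random data standing in for SIFT descriptors and a saliency map.
rng = np.random.default_rng(3)
descs = rng.normal(size=(50, 128))                    # 50 SIFT-like descriptors
kps = rng.integers(0, 64, size=(50, 2))               # (x, y) keypoint locations
saliency_map = rng.uniform(size=(64, 64))             # placeholder saliency prior
print(len(weighted_histogram(descs, kps, saliency_map, E2LSHVocabulary(dim=128))))
```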