• Title/Summary/Keyword: Huge Page

A Packet Processing Method for Handling Large-Capacity Traffic over 20Gbps Using Multi-Core and Huge Page Memory Approaches

  • Kwon, Young-Sun; Park, Byeong-Chan; Chang, Hoon
    • Journal of the Korea Society of Computer and Information / v.26 no.6 / pp.73-80 / 2021
  • In this paper, we propose a packet processing method capable of handling large-capacity traffic over 20Gbps using multi-core and huge page memory approaches. As ICT technology advances, global average monthly traffic is expected to reach 396 exabytes by 2022. As network traffic increases, cyber threats increase with it, making traffic analysis ever more important. Existing high-cost foreign products simply store statistics about the analyzed traffic and display them visually. Network administrators introduce many traffic analysis systems to analyze traffic in various network segments, but they cannot check the aggregated traffic of the entire network. In addition, since most existing equipment is of the 10Gbps class, it cannot keep pace with traffic that grows every year. As a method of processing large-capacity traffic over 20Gbps, this paper moves from a single-core, basic SMA memory approach to high-performance packet reception, packet detection, and statistics that process raw packets without copying, using multi-core and NUMA memory approaches. When the proposed method was used, it was confirmed that more than 50% more traffic was processed than with the existing equipment.
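
The huge page and multi-core ideas summarized above can be illustrated with a short sketch. This is a minimal sketch under stated assumptions, not the authors' implementation: it assumes a Linux host and Python 3.8+, uses madvise(MADV_HUGEPAGE) as a transparent-huge-page hint for a large packet buffer, and pins a worker with os.sched_setaffinity; the buffer size and core number are illustrative.

```python
# Minimal sketch (not the paper's code): back a large packet buffer with huge
# pages and pin the worker to one core, so per-core RX workers keep their caches
# and avoid TLB pressure. Assumes Linux; buffer size and core id are illustrative.
import mmap
import os

BUFFER_SIZE = 1 << 30  # 1 GiB raw-packet buffer (illustrative)

def allocate_packet_buffer(size: int = BUFFER_SIZE) -> mmap.mmap:
    """Allocate an anonymous buffer and hint the kernel to use huge pages."""
    buf = mmap.mmap(-1, size, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)
    if hasattr(mmap, "MADV_HUGEPAGE"):           # Linux-only constant
        buf.madvise(mmap.MADV_HUGEPAGE)          # fewer TLB misses on 2 MiB pages
    return buf

def pin_to_core(core_id: int) -> None:
    """Pin this process to a single core (one RX worker per core)."""
    os.sched_setaffinity(0, {core_id})

if __name__ == "__main__":
    pin_to_core(0)                               # worker 0 on core 0
    ring = allocate_packet_buffer()
    ring[:4] = b"\xde\xad\xbe\xef"               # stand-in for a received packet
    print("buffer ready:", len(ring), "bytes")
```

A real 20Gbps pipeline would additionally map NIC rings through a kernel-bypass framework and place each worker's buffer on its local NUMA node; the sketch only shows the huge page and core-affinity pieces.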

Design of Semantic Search System for the Search of Duplicated Geospatial Projects (공간정보사업의 중복사업 검색을 위한 의미기반검색 시스템의 설계)

  • Park, Sangun; Lim, Jay Ick; Kang, Juyoung
    • Journal of Information Technology Services / v.12 no.3 / pp.389-404 / 2013
  • Geospatial information, a form of social overhead capital, is expected to be a core growth industry in the future. Producing geospatial information requires a huge budget, so preventing the duplication of geospatial projects is a very important objective of geospatial information policy. In this paper, we propose a semantic search system that detects possible duplication among geospatial projects by using an ontology for geospatial project administration. To achieve this goal, we suggest how to construct and utilize a geospatial project ontology, and we design the architecture and process of the semantic search. Moreover, we show how the suggested semantic search works with a duplicated-project search scenario. The suggested system enables nonprofessionals to easily search for duplicated projects, so we expect this research to contribute to an effective and efficient duplication review process for geospatial projects.

Neural Text Categorizer for Exclusive Text Categorization

  • Jo, Tae-Ho
    • Journal of Information Processing Systems / v.4 no.2 / pp.77-86 / 2008
  • This research proposes a new neural network for text categorization that uses an alternative to numerical vectors for representing documents. Since the proposed neural network was originally intended only for text categorization, it is called NTC (Neural Text Categorizer) in this research. Numerical vectors representing documents for text mining tasks have two inherent problems: huge dimensionality and sparse distribution. Although various feature selection methods have been developed to address the first problem, the reduced dimensionality still remains large. If the dimensionality is reduced excessively by a feature selection method, the robustness of text categorization is degraded. Even though an SVM (Support Vector Machine) can tolerate huge dimensionality, it cannot tolerate sparse distribution. The goal of this research is to address both problems at the same time by proposing a new representation of documents and a new neural network that uses that representation as its input.
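
The two problems named in the abstract, huge dimensionality and sparse distribution, are easy to see in code. The sketch below is a generic illustration with a made-up vocabulary and document, not the NTC representation itself; it simply contrasts a sparse bag-of-words vector with a compact symbolic representation of the kind the abstract argues for.

```python
# Generic illustration (not the NTC model): a bag-of-words vector is one dimension
# per vocabulary term and mostly zeros, while a symbolic representation keeps only
# a few salient words. Vocabulary and document text are made up.
from collections import Counter

vocabulary = ["huge", "page", "memory", "packet", "search", "ontology",
              "neural", "text", "category", "vector"]   # real collections: 10k+ terms

def to_numeric_vector(doc: str) -> list:
    """Classic representation: term counts over the whole vocabulary (sparse)."""
    counts = Counter(doc.lower().split())
    return [counts.get(term, 0) for term in vocabulary]

def to_string_features(doc: str, top_k: int = 3) -> list:
    """Alternative representation: keep only the top-k words as symbols (compact)."""
    counts = Counter(doc.lower().split())
    return [term for term, _ in counts.most_common(top_k)]

doc = "huge page memory reduces misses for packet capture"
print(to_numeric_vector(doc))    # [1, 1, 1, 1, 0, 0, 0, 0, 0, 0] -- mostly zeros
print(to_string_features(doc))   # e.g. ['huge', 'page', 'memory']
```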

Application to 2-D Page-oriented Data Optical Cryptography Based on CFB Mode (CFB 모드에 기반한 2 차원 페이지 데이터의 광학적 암호화 응용)

  • Gil, Sang-Keun
    • Journal of IKEEE / v.19 no.3 / pp.424-430 / 2015
  • This paper proposes an optical cryptography application for 2-D page-oriented data based on the CFB (Cipher Feedback) mode algorithm. The proposed method uses a free-space optically interconnected dual-encoding technique that performs XOR logic operations in order to implement 2-D page-oriented data encryption. Because the encryption key is a huge 2-D arrayed page, the proposed method provides a cryptosystem with greater security strength than the conventional CFB block mode with a 1-D encryption key. To verify the proposed method, encryption and decryption of 2-D page data and an error analysis are carried out by computer simulations. The results show that the proposed CFB optical encryption system makes it possible to implement a stronger cryptosystem with massive data processing and a long encryption key compared to the 1-D block method.
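
The CFB-style chaining over pages can be mirrored in software with plain XOR operations. The sketch below is a byte-level simplification under assumed parameters, not the paper's optical dual-encoding system: each "page" is a flat byte string standing in for a 2-D binary page, the per-page keystream is the key page XOR-ed with the previous ciphertext page (the feedback register), and the page size, key, and IV are made up.

```python
# Byte-level simplification of XOR-based, CFB-style chaining over pages
# (not the optical dual-encoding system of the paper). Each ciphertext page feeds
# the keystream of the next page, so identical plaintext pages encrypt differently.
import os

PAGE_SIZE = 64  # stand-in for a flattened 2-D page (e.g. an 8x8 byte block)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cfb_encrypt(pages: list, key: bytes, iv: bytes) -> list:
    feedback, cipher_pages = iv, []
    for page in pages:
        keystream = xor_bytes(key, feedback)   # "encrypt" the feedback register
        c = xor_bytes(page, keystream)         # ciphertext page
        cipher_pages.append(c)
        feedback = c                           # CFB: ciphertext feeds the next page
    return cipher_pages

def cfb_decrypt(cipher_pages: list, key: bytes, iv: bytes) -> list:
    feedback, plain_pages = iv, []
    for c in cipher_pages:
        plain_pages.append(xor_bytes(c, xor_bytes(key, feedback)))
        feedback = c
    return plain_pages

if __name__ == "__main__":
    key, iv = os.urandom(PAGE_SIZE), os.urandom(PAGE_SIZE)
    pages = [b"A" * PAGE_SIZE, b"A" * PAGE_SIZE, b"C" * PAGE_SIZE]
    cipher = cfb_encrypt(pages, key, iv)
    assert cfb_decrypt(cipher, key, iv) == pages
    assert cipher[0] != cipher[1]              # same plaintext, different ciphertext
    print("round-trip OK for", len(pages), "pages")
```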

Design and Implementation of Dynamic Web Server Page Builder on Web (웹 기반의 동적 웹 서버 페이지 생성기 설계 및 구현)

  • Shin, Yong-Min; Kim, Byung-Ki
    • The KIPS Transactions: Part D / v.15D no.1 / pp.147-154 / 2008
  • With the spread of Internet use, various web applications have been developed that publish information managed in internal databases on the web through web server pages. In most cases, however, programs are written directly without a systematic development methodology, or a heavyweight methodology that does not fit the project is applied, which decreases development efficiency. A web application that does not follow a systematic development methodology and uses a script language can reduce the productivity of program development, maintenance, and reuse. In this paper, an automatic builder for dynamic web server pages is designed and implemented that uses the database to produce fast and effective scripts for web application development. It suggests a regularized script model and, by analyzing dynamic web server page patterns against the database, generates standardized scripts through a data-bound control tag creator, which contributes to productivity when used in web application development and maintenance.
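
The core idea of generating a standardized, data-bound page from a database table can be sketched briefly. This is a hypothetical generator, not the tool described in the paper: the table name, column names, template markup, and data-bound tag syntax are all illustrative assumptions.

```python
# Hypothetical sketch of a data-bound page builder (not the paper's tool):
# given a table name and its columns, emit a standardized server-page template
# whose cells are bound to the table's fields. All names and markup are made up.
PAGE_TEMPLATE = """<html><body>
<h1>{table} list</h1>
<!-- bound query: SELECT {column_list} FROM {table} -->
<table>
  <tr>{header_cells}</tr>
  <tr>{data_cells}</tr>   <!-- repeated once per record at render time -->
</table>
</body></html>"""

def generate_page(table: str, columns: list) -> str:
    """Build a standardized data-bound page from a table schema."""
    header_cells = "".join(f"<th>{c}</th>" for c in columns)
    data_cells = "".join(f"<td><%= row.{c} %></td>" for c in columns)  # bound tags
    return PAGE_TEMPLATE.format(table=table,
                                column_list=", ".join(columns),
                                header_cells=header_cells,
                                data_cells=data_cells)

if __name__ == "__main__":
    print(generate_page("board_post", ["title", "author", "written_at"]))
```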

The Use of Reinforcement Learning and The Reference Page Selection Method to improve Web Spidering Performance (웹 탐색 성능 향상을 위한 강화학습 이용과 기준 페이지 선택 기법)

  • 이기철; 이선애
    • Journal of the Korea Computer Industry Society / v.3 no.3 / pp.331-340 / 2002
  • The web is becoming so huge and intractable that without an intelligent information extractor we would become more and more helpless. Conventional web spidering techniques for general-purpose search engines may be too slow for specialized search engines, which concentrate only on specific areas or keywords. In this paper, a new model for improving web spidering capability is suggested and tested experimentally. Selecting adequate reference web pages from the initial web page set relevant to a given specific area (or set of keywords) can be very important for reducing spidering time. Our reference web page selection method, DOPS, dynamically and orthogonally selects web pages and can also decide the appropriate number of reference pages using a newly defined measure. Even for a very specific area, this method worked comparably well, almost at the level of experts. Considering that experts cannot work over a huge initial page set and still have difficulty deciding the optimal number of reference web pages, this method seems very promising. We also applied reinforcement learning to the web environment, and DOPS-based reinforcement learning experiments show that our method performs quite favorably in terms of both the number of hyperlinks followed and time.
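
The way reinforcement signals can steer a focused crawl can be sketched with a toy frontier and value table. This is not DOPS or the paper's experimental setup: the link graph, reward function, learning rate, and greedy frontier policy below are invented purely to illustrate value updates guiding which link is expanded next.

```python
# Toy sketch of reinforcement-guided spidering (not DOPS / the paper's system):
# keep a value estimate per URL, reward URLs whose pages match the target topic,
# propagate some credit to their out-links, and expand the best-valued link first.
from collections import defaultdict

ALPHA = 0.3                                   # learning rate
relevant_keywords = {"huge", "page", "memory"}

# made-up link graph: url -> (out-links, words on the page)
web = {
    "/start": (["/a", "/b"], {"index"}),
    "/a":     (["/c"],       {"huge", "page"}),
    "/b":     (["/d"],       {"sports"}),
    "/c":     ([],           {"memory", "page"}),
    "/d":     ([],           {"weather"}),
}

value = defaultdict(float)                    # estimated relevance value per URL
frontier, visited = ["/start"], set()

while frontier:
    frontier.sort(key=lambda u: value[u], reverse=True)   # expand best link first
    url = frontier.pop(0)
    if url in visited:
        continue
    visited.add(url)
    links, words = web[url]
    reward = 1.0 if words & relevant_keywords else 0.0
    value[url] += ALPHA * (reward - value[url])            # value update toward reward
    for nxt in links:
        value[nxt] += ALPHA * reward                        # pass credit to out-links
        frontier.append(nxt)

print({u: round(v, 2) for u, v in value.items()})
```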

The Development of Forest Fire Statistical Management System using Web GIS Technology

  • Jo, Myung-Hee; Kim, Joon-Bum; Kim, Hyun-Sik; Jo, Yun-Won
    • Proceedings of the KSRS Conference / 2002.10a / pp.183-190 / 2002
  • In this paper, a forest fire statistical information management system is constructed in a web environment using web-based GIS (Geographic Information System) technology. Through this system, general users can easily access forest fire statistical information and obtain it in visual forms such as maps, graphs, and text using only a web browser. Moreover, forest fire officials can easily control and manage all domestic information by accessing the input, retrieval, and output interfaces. To implement this system, Microsoft IIS 5.0 is used as the web server, and Oracle 8i and ASP (Active Server Pages) are used for database construction and dynamic web page operation, respectively. ESRI ArcIMS is used to serve map data, with Java and HTML as the system development languages. Through this system, general users can obtain all forest fire related information visually in real time and become more aware of forest fire prevention. In addition, forest officials can manage domestic forest resources and control forest fire danger areas efficiently and scientifically by retrieving and analyzing huge amounts of forest data through this system, saving the manpower, time, and cost needed to collect and manage data.

Web Page Recommendation using a Stochastic Process Model (Stochastic 프로세스 모델을 이용한 웹 페이지 추천 기법)

  • Noh, Soo-Ho; Park, Byung-Joon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.42 no.6 / pp.37-46 / 2005
  • In a Web environment with a huge amount of information, the page access patterns of users visiting a web site can be diverse and can change continually as the environment changes. It is therefore almost impossible to design and develop web sites that fit every user's desires perfectly. The adaptive web site was proposed as a solution to this problem. In this paper, we present an effective method that uses a DTMC (Discrete-Time Markov Chain) probabilistic model to learn users' access patterns and applies these patterns to construct an adaptive web site.
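
A first-order DTMC over page visits can be estimated directly from session logs by counting page-to-page transitions and normalizing each row. The sketch below is a generic illustration of that estimate with made-up sessions, not the paper's recommendation algorithm.

```python
# Generic DTMC sketch (not the paper's exact method): count transitions observed
# in user sessions, turn each row into probabilities, and recommend the most
# likely next page. The session logs below are made up.
from collections import defaultdict

sessions = [
    ["home", "products", "cart", "checkout"],
    ["home", "products", "reviews"],
    ["home", "blog", "products", "cart"],
]

counts = defaultdict(lambda: defaultdict(int))      # counts[a][b]: b follows a
for session in sessions:
    for current, nxt in zip(session, session[1:]):
        counts[current][nxt] += 1

def transition_probs(page: str) -> dict:
    """One row of the DTMC transition matrix."""
    total = sum(counts[page].values())
    return {nxt: c / total for nxt, c in counts[page].items()} if total else {}

def recommend_next(page: str):
    """Most probable next page, or None if the page was never left."""
    probs = transition_probs(page)
    return max(probs, key=probs.get) if probs else None

print(transition_probs("home"))     # {'products': 0.67, 'blog': 0.33} (approx.)
print(recommend_next("products"))   # 'cart' (2 of the 3 observed transitions)
```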

Improving Performance of Web Search using The User Preference in Query Word Senses (질의어 의미별 사용자 선호도를 이용한 웹 검색의 성능 향상)

  • 김형일; 김준태
    • Journal of KIISE: Software and Applications / v.31 no.8 / pp.1101-1112 / 2004
  • In this paper, we propose a Web page weighting scheme that uses user preference for each sense of a query word to improve the performance of Web search. Search engines generally assign weights to a web page by using relevancy alone, obtained by comparing the query word with the words in the web page. In retrieval over huge data such as the Web, simple word comparison cannot distinguish important documents because too many documents have similar relevancy. In this paper, we implement a WordNet-based user interface that helps distinguish the different senses of a query word, and we construct a search engine in which the implicit evaluations of multiple users are reflected in ranking by accumulating the number of clicks. Click counts are stored separately according to sense, so that a more accurate search is possible. Experimental results with several keywords show that the precision of the proposed system is improved over that of conventional search engines.
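
Sense-separated click accumulation can be sketched with a counter keyed by (query word, sense) and URL, blended with a base relevance score at ranking time. This is a schematic illustration with invented senses, URLs, and weights, not the paper's WordNet-integrated engine.

```python
# Schematic sketch of sense-separated click ranking (not the paper's engine):
# clicks are accumulated per (query word, sense) and per URL, then blended with
# a base relevance score. Senses, URLs, scores, and the blend weight are made up.
from collections import defaultdict

clicks = defaultdict(lambda: defaultdict(int))   # clicks[(word, sense)][url] -> count

def record_click(word: str, sense: str, url: str) -> None:
    """Store a click under the sense the user selected for the query word."""
    clicks[(word, sense)][url] += 1

def rerank(word: str, sense: str, candidates: dict, click_weight: float = 0.5) -> list:
    """Blend base relevance with sense-specific click share, highest score first."""
    sense_clicks = clicks[(word, sense)]
    total = sum(sense_clicks.values()) or 1
    scored = {url: rel + click_weight * sense_clicks[url] / total
              for url, rel in candidates.items()}
    return sorted(scored, key=scored.get, reverse=True)

# users searching "page" in its memory-management sense prefer the kernel document
record_click("page", "memory_block", "kernel.org/hugepages")
record_click("page", "memory_block", "kernel.org/hugepages")
record_click("page", "web_document", "example.com/html-guide")

candidates = {"kernel.org/hugepages": 0.4, "example.com/html-guide": 0.5}
print(rerank("page", "memory_block", candidates))   # kernel page now ranks first
```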

Diversity of Bacillus thuringiensis Strains Isolated from Citrus Orchards in Spain and Evaluation of Their Insecticidal Activity Against Ceratitis capitata

  • Vidal-Quist, J.C.; Castanera, P.; Gonzalez-Cabrera, J.
    • Journal of Microbiology and Biotechnology / v.19 no.8 / pp.749-759 / 2009
  • A survey of Bacillus thuringiensis (Berliner) strains isolated from Spanish citrus orchards was performed, and the strains were tested for insecticidal activity against the Mediterranean fruit fly Ceratitis capitata (Wiedemann), a key citrus pest in Spain. From a total of 150 environmental samples, 376 isolates were selected, giving a total B. thuringiensis index of 0.52. The collection was characterized by means of phase-contrast microscopy, SDS-PAGE, and PCR analysis with primer pairs detecting the toxin genes cry1, cry2, cry3, cry4, cry5, cry7, cry8, cry9, cry10, cry11, cry12, cry14, cry17, cry19, cry21, cry27, cry39, cry44, cyt1, and cyt2. Diverse crystal inclusion morphologies were identified: bipyramidal (45%), round (40%), adhered to the spore (7%), small (5%), and irregular (3%). SDS-PAGE of spore-crystal preparations revealed 39 different electrophoresis patterns. All primer pairs used in the PCR tests gave positive amplifications in strains of our collection, except for the primers detecting the cry3, cry19, cry39, and cry44 genes. Strains containing cry1, cry2, cry4, and cry27 genes were the most abundant (48.7%, 46%, 11.2%, and 8.2% of the strains, respectively). Ten different genetic profiles were found, although a total of 109 strains did not amplify with the set of primers used. Screening for toxicity against C. capitata adults was performed using both spore-crystal and soluble fractions. Mortality levels were less than 30%. We have developed a large and diverse B. thuringiensis strain collection with huge potential for controlling several agricultural pests; however, further research is needed to identify Bt strains active against C. capitata.