• Title/Summary/Keyword: computing model


Information Retrieval System based on Mobile Agents in Distributed and Heterogeneous Environment (분산 이형 환경에서의 이동에이전트를 이용한 정보 검색 시스템)

  • Park, Jae-Box;Lee, Kwang-young;Jo, Geun-Sik
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.1_2
    • /
    • pp.30-41
    • /
    • 2002
  • We focus on mobile agents, which are considered a new paradigm for retrieving information from large volumes of data in distributed and heterogeneous environments. A mobile agent moves the computation to the data instead of moving large volumes of data to the computation. In this paper, we propose an information retrieval model that can effectively search data in a distributed and heterogeneous environment using mobile agents. Our model is applied to the design and implementation of a Q&A (Question and Answer) retrieval system. Our Q&A retrieval system, called QASSMA (Q&A Search System using Mobile Agents), uses mobile agents to retrieve articles from Q&A boards and newsgroups that exist in heterogeneous and distributed environments. QASSMA has the following features and advantages. First, the mobile retrieval agent moves to the destination server to retrieve articles, which reduces retrieval time by eliminating data traffic from the server to the client host. It also reduces the traffic that occurs in a centralized network system and lowers resource usage by sending an agent to run on the destination host. Finally, the mobile retrieval agent of QASSMA can dynamically add and update class files according to its retrieval environment and thus support other retrieval methods. In this paper, we show through experiments that our Q&A retrieval system using mobile agents is more efficient than a retrieval system using static agents.
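
The "move the computation to the data" idea can be illustrated with a small, hypothetical sketch: an agent carrying a query is dispatched to each server, filters articles locally, and only the matching results travel back. The classes and names below (RetrievalAgent, dispatch) are illustrative assumptions, not QASSMA's actual API.

```python
# Hypothetical sketch of a mobile-agent style retrieval; names are illustrative only.
from dataclasses import dataclass

@dataclass
class Article:
    board: str
    title: str
    body: str

class RetrievalAgent:
    """An agent that would be serialized, shipped to the remote host, and run there."""
    def __init__(self, query: str):
        self.query = query
        self.results: list[Article] = []

    def run(self, local_articles: list[Article]) -> None:
        # Executed on the destination server: only matching articles are kept,
        # so only the (small) result set travels back over the network.
        self.results = [a for a in local_articles if self.query.lower() in a.body.lower()]

def dispatch(agent: RetrievalAgent, servers: dict[str, list[Article]]) -> list[Article]:
    # Stand-in for agent migration: a real mobile-agent platform would transfer
    # the agent's code and state to each server and execute it remotely.
    merged: list[Article] = []
    for host, articles in servers.items():
        agent.run(articles)
        merged.extend(agent.results)
    return merged
```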

Implementation and Performance Evaluation of Transaction Protocol for Wireless Internet Services (무선 인터넷 서비스를 위한 트랜잭션 프로토콜의 구현과 성능평가)

  • Choi, Yoon-Suk;Lim, Kyung-Shik
    • Journal of KIISE:Information Networking
    • /
    • v.29 no.4
    • /
    • pp.447-458
    • /
    • 2002
  • In this paper, we design and implement the Wireless Transaction Protocol (WTP) and evaluate it for wireless transaction processing in mobile computing environments. The design and implementation of WTP are based on the coroutine model, which may be suitable for light-weight portable devices. We test the compatibility between our product and other products such as Nokia, Kannel, and WinWAP. For the evaluation of WTP, we use an Internet simulator that can generate random wireless errors based on the Gilbert model. In our experiment, the performance of WTP is measured and compared with those of the Transmission Control Protocol (TCP) and TCP for Transactions. The experiment shows that WTP outperforms the other two protocols for wireless transaction processing in terms of throughput and delay. In particular, WTP shows much higher performance in cases of high error rates and a high probability of burst errors. This comes from the fact that WTP uses a small number of packets to process a transaction compared to the other two protocols and introduces a fixed time interval for retransmission instead of the exponential backoff algorithm. The experiment also shows that WTP performance is optimized when the retransmission counter is set to 5 or 6 in cases of high burst error rates.
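
The retransmission-policy contrast highlighted in the abstract can be made concrete with a small, assumed example: a fixed retransmission interval (the WTP-style policy) yields a linear retry schedule, while exponential backoff (the TCP-style policy) grows quickly. The interval and retry values below are placeholders, not measurements from the paper.

```python
# Illustrative comparison of the two retransmission policies discussed above.
def fixed_interval_schedule(interval_s: float = 5.0, max_retries: int = 6) -> list[float]:
    """Retransmission times when every retry waits the same fixed interval."""
    return [interval_s * (i + 1) for i in range(max_retries)]

def exponential_backoff_schedule(base_s: float = 1.0, max_retries: int = 6) -> list[float]:
    """Retransmission times when each retry doubles the previous timeout."""
    times, elapsed = [], 0.0
    for i in range(max_retries):
        elapsed += base_s * (2 ** i)
        times.append(elapsed)
    return times

if __name__ == "__main__":
    print("fixed   :", fixed_interval_schedule())       # [5, 10, 15, 20, 25, 30]
    print("backoff :", exponential_backoff_schedule())  # [1, 3, 7, 15, 31, 63]
```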

Mobile M/VC Application Framework Using Observer/Observable Design Pattern (관찰자/피관찰자 설계 패턴을 이용한 모바일 M/VC 응용 프레임워크)

  • Eum Doo-Hun
    • Journal of Internet Computing and Services
    • /
    • v.7 no.2
    • /
    • pp.81-92
    • /
    • 2006
  • Recently, the number of mobile phone and PDA users has increased rapidly. Monitoring and control applications such as geographical and traffic information systems are widely used on wireless devices. In this paper, we introduce the mobile M/VC application framework, which supports the rapid construction of mobile monitoring and control (M/VC) applications. The mobile M/VC application framework uses the mobile Observer/Observable pattern, which extends Java's Observer/Observable for automatic interaction of server and client objects in wireless environments. It also provides the Multiplexer and Demultiplexer classes, which support the assembly of Observer and Observable objects. To construct an application using the framework, developers just need to create the necessary objects from the Observable and MobileObserver classes and interconnect them structurally (in a plug-and-play style) through Multiplexer and Demultiplexer objects. Then, state changes of Observable objects are notified to the connected Observer objects, and user input on Observer objects is propagated to the Observable objects. This mechanism is the core process of monitoring and control applications. Therefore, the mobile M/VC application framework can improve the productivity of mobile applications and enhance the reusability of components such as Observer and Observable objects in wireless environments.
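
A minimal sketch of the Observer/Observable wiring described above, written in Python rather than the framework's Java; the Multiplexer class and method names are assumptions for illustration, not the framework's actual API.

```python
# Minimal Observer/Observable sketch in the spirit of the M/VC framework.
class Observable:
    def __init__(self):
        self._observers = []

    def attach(self, observer) -> None:
        self._observers.append(observer)

    def notify(self, state) -> None:
        # State changes are pushed to every connected observer (the monitoring side).
        for obs in self._observers:
            obs.update(state)

class MobileObserver:
    def __init__(self, name: str):
        self.name = name

    def update(self, state) -> None:
        print(f"{self.name} received state: {state}")

class Multiplexer:
    """Plug-and-play wiring of one Observable to many Observers."""
    @staticmethod
    def connect(observable: Observable, observers: list) -> None:
        for obs in observers:
            observable.attach(obs)

# Usage: a traffic sensor's state propagates to two mobile clients.
sensor = Observable()
Multiplexer.connect(sensor, [MobileObserver("phone"), MobileObserver("pda")])
sensor.notify({"road": "A1", "speed_kmh": 42})
```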

The development of the procurement process system for e-Biz of the plant business (플랜트 산업의 e-Biz화를 위한 구매 Process System 개발)

  • Kim Hoi-Sub;Lee Joo-Pyo;Han Sang-hoon;Cho Se-hyoung;Park Chang-Hyun;Han Jae-Bum;Kim Sung-Ho;Kim Gyu-Tae
    • Journal of Internet Computing and Services
    • /
    • v.4 no.5
    • /
    • pp.11-19
    • /
    • 2003
  • Since B2C (Business to Customer), from which e-commerce originated, was overtaken by B2B, e-Business has grown rapidly. Recently, e-Procurement based on a 1:n concept has been under development as a self-purchase system linked to in-house ERP in many conglomerates in the Korean market. However, in order to vitalize e-Biz in the plant industry, we need to set up e-marketplaces where many sellers and buyers can meet at the same time; this has become essential for success as an expanded business model. In this paper, we expect that developing the Purchase Process System and related modules, as a prerequisite for e-Biz in the plant industry, will lay the foundation for e-transformation in that industry and provide an exemplary model for e-commerce. The Purchase Process System consists of 1) an e-Purchasing Module that manages bidding and contract information based on quotation inquiries, 2) an e-Expediting Module that manages information to guarantee on-time delivery, 3) an e-Certification Module that controls user authentication, and 4) an e-Basic Module that manages bulletin boards, Q&A, etc.

Knowledge based Genetic Algorithm for the Prediction of Peptides binding to HLA alleles common in Koreans (지식기반 유전자알고리즘을 이용한 한국인 빈발 HLA 대립유전자에 대한 결합 펩타이드 예측)

  • Cho, Yeon-Jin;Oh, Heung-Bum;Kim, Hyeon-Cheol
    • Journal of Internet Computing and Services
    • /
    • v.13 no.4
    • /
    • pp.45-52
    • /
    • 2012
  • T cells induce immune responses and thereby eliminate infected micro-organisms when peptides from the microbial proteins are bound to HLAs on host cell surfaces. It is known that the more stable the binding of a peptide to an HLA is, the stronger the T cell response becomes, removing the source of infection more effectively. Accordingly, if peptides (HLA binders) that bind stably to a certain HLA are found, those peptides can be utilized in the development of peptide vaccines against infectious diseases or even cancer. However, HLA is highly polymorphic, so a large number of alleles occur with appreciable frequencies even within one population. Therefore, it is very inefficient to find the peptides stably bound to a number of HLAs by testing random candidate peptides against all the alleles frequent in the population. In order to solve this problem, computational methods have recently been developed to predict peptides that bind stably to a certain HLA. These methods can markedly decrease the number of candidate peptides to be examined by biological experiments. Accordingly, this paper not only introduces a machine learning method to predict peptides binding to an HLA, but also suggests a new prediction model, the so-called 'knowledge-based genetic algorithm', which has not previously been tried for HLA binding peptide prediction. Although based on a genetic algorithm (GA), it showed better performance than a plain GA by incorporating expert knowledge into the process of the algorithm. Furthermore, it could extract rules that predict binding peptides for the HLA alleles common in Koreans.
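
The following is a toy sketch of how expert knowledge might be folded into a genetic algorithm for peptide search, for example by biasing anchor positions of candidate 9-mers. The anchor-position preferences and the scoring function are invented for illustration and are not the paper's actual model.

```python
# Toy knowledge-based GA sketch for 9-mer peptide search (illustrative only).
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PEPTIDE_LEN = 9
# Assumed "expert knowledge": preferred residues at two anchor positions.
ANCHOR_PREFERENCES = {1: "LM", 8: "VL"}

def fitness(peptide: str) -> float:
    # Placeholder binding score: reward preferred anchor residues.
    return sum(1.0 for pos, allowed in ANCHOR_PREFERENCES.items() if peptide[pos] in allowed)

def seed_individual() -> str:
    # Knowledge-based seeding: bias anchor positions toward known preferences.
    residues = [random.choice(AMINO_ACIDS) for _ in range(PEPTIDE_LEN)]
    for pos, allowed in ANCHOR_PREFERENCES.items():
        residues[pos] = random.choice(allowed)
    return "".join(residues)

def mutate(peptide: str, rate: float = 0.1) -> str:
    return "".join(random.choice(AMINO_ACIDS) if random.random() < rate else aa for aa in peptide)

def evolve(pop_size: int = 50, generations: int = 100) -> str:
    population = [seed_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        population = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(population, key=fitness)

print(evolve())
```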

Performance Analysis of Sliding Window based Stream High Utility Pattern Mining Methods (슬라이딩 윈도우 기반의 스트림 하이 유틸리티 패턴 마이닝 기법 성능분석)

  • Ryang, Heungmo;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.17 no.6
    • /
    • pp.53-59
    • /
    • 2016
  • Recently, huge amounts of stream data have been generated in real time from various applications such as wireless sensor networks, Internet of Things services, and social network services. For this reason, developing efficient methods to discover useful information from such data, by processing and analyzing them and employing the information for better decision making, has become a significant issue. Since stream data are generated continuously and rapidly, there is a need to deal with them through a minimum number of accesses. In addition, an appropriate method is required to analyze stream data in resource-limited environments where fast processing with low power consumption is necessary. To address this issue, the sliding window model has been proposed and researched. Meanwhile, pattern mining, one of the data mining techniques for finding meaningful information from huge data, extracts such information in the form of patterns. Frequency-based traditional pattern mining can process only binary databases and treats all items in the databases with the same importance. As a result, frequent pattern mining has the disadvantage that it cannot reflect the characteristics of real databases, although it has played an essential role in the data mining field. From this perspective, high utility pattern mining has been suggested for discovering more meaningful information from non-binary databases by considering the characteristics and relative importance of items. General high utility pattern mining methods for static databases, however, are not suitable for handling stream data. To address this issue, sliding window based high utility pattern mining has been proposed for finding significant information from stream data in resource-limited environments by considering their characteristics and processing them efficiently. In this paper, we conduct various experiments with datasets for performance evaluation of sliding window based high utility pattern mining algorithms and analyze the experimental results, through which we study their characteristics and directions for improvement.
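
A small sketch of the sliding window model applied to utility computation over a transaction stream follows; the data layout, external utility values, and window size are illustrative assumptions, not taken from any of the surveyed algorithms.

```python
# Sliding window over a transaction stream with item utilities (illustrative only).
from collections import deque

# Each transaction maps an item to its purchased quantity; external utilities
# give the relative importance (e.g. unit profit) of each item.
EXTERNAL_UTILITY = {"a": 5, "b": 2, "c": 1}

def transaction_utility(txn: dict[str, int]) -> int:
    return sum(qty * EXTERNAL_UTILITY[item] for item, qty in txn.items())

class SlidingWindow:
    """Keeps only the most recent `size` transactions of the stream."""
    def __init__(self, size: int):
        self.window = deque(maxlen=size)

    def insert(self, txn: dict[str, int]) -> None:
        self.window.append(txn)  # the oldest transaction is dropped automatically

    def item_utility(self, item: str) -> int:
        # Total utility of `item` within the current window only.
        return sum(txn.get(item, 0) * EXTERNAL_UTILITY[item] for txn in self.window)

stream = [{"a": 1, "b": 2}, {"c": 4}, {"a": 2, "c": 1}, {"b": 1}]
win = SlidingWindow(size=3)
for txn in stream:
    win.insert(txn)
print(win.item_utility("a"))  # utility of item 'a' over the last 3 transactions
```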

A study on ecosystem model of the magazines for smart devices - Focusing on the case of magazine business in foreign countries (스마트 디바이스 잡지 생태계 모델 연구 - 외국 잡지의 비즈니스 사례를 중심으로)

  • Chang, Yong Ho;Kong, Byoung-Hun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.5
    • /
    • pp.2641-2654
    • /
    • 2014
  • In the smart media environment, the magazine industry has been experiencing a transition to an ecosystem of value networks, which involves high complexity and ambiguity. Using a case study method, this article conducts research on digital convergence, the model of the magazine ecosystem, and the adaptation strategies of global magazine companies. The findings show that content production at global magazines has been based on a collaborative production system involving communities, expert communities, creative users, media content companies, and the magazine platform. The system shows different patterns and characteristics depending on whether the platform is magazine-driven, platform-driven, or user-driven. The collaboration system has been confirmed in various cases: Huffington Post and Zinio collaborating with media content companies, Amazon magazines and Bookish with magazine companies, Huffington Post and Wired with expert communities, and Flipboard with creative users and communities. Foreign magazine content diverges into paper, electronic, app, and web magazines as lively trade of content begins on the magazine platform. In the area of content use, readers effectively employ smart media technologies such as cloud computing, artificial intelligence, and module individualization, sustaining a virtuous cycle in the relationships among communities, expert communities, and creative users.

Estimation of R factor using hourly rainfall data

  • Risal, Avay;Kum, Donghyuk;Han, Jeongho;Lee, Dongjun;Lim, Kyoungjae
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2016.05a
    • /
    • pp.260-260
    • /
    • 2016
  • Soil erosion is a very serious problem from an agricultural as well as an environmental point of view. Various computer models have been used to estimate soil erosion and assess erosion control practices. The Universal Soil Loss Equation (USLE) is a popular model which has been used in many countries around the world. Erosivity (the USLE R factor) is one of the USLE input parameters that reflects the impact of rainfall in computing soil loss. The value of the R factor depends on the energy (E) and the maximum rainfall intensity over a specific period ($I30_{max}$) of each rainfall event, and thus it can be calculated using higher temporal resolution rainfall data such as 10-minute interval data. However, 10-minute interval rainfall data may not be available in every part of the world. In that case we can use hourly rainfall data to compute the R factor. The maximum 60-minute rainfall ($I60_{max}$) can be used instead of the maximum 30-minute rainfall ($I30_{max}$), as suggested by the USLE manual. However, the average annual R factor computed using hourly rainfall data needs a correction factor before it can be used in the USLE model. The objectives of our study are to derive the relation between average annual R factor values computed from 10-minute interval and hourly rainfall data and to determine a correction coefficient for the R factor computed from hourly rainfall data. Seventy-five weather stations in Korea were selected for this study. Ten-minute interval rainfall data for these stations were obtained from the Korea Meteorological Administration (KMA) and aggregated to hourly rainfall data. The R factor and $I60_{max}$ obtained from hourly rainfall data were compared with the R factor and $I30_{max}$ obtained from 10-minute interval data. A linear relation between the average annual R factor obtained from 10-minute interval rainfall and from hourly data was derived with $R^2=0.69$, and a correction coefficient was developed for the R factor calculated using hourly rainfall data. Similarly, a relation was obtained between event-wise $I30_{max}$ and $I60_{max}$ with a higher $R^2$ value of 0.91. Thus $I30_{max}$ can be estimated from $I60_{max}$ with higher accuracy, and hourly rainfall data can be used to determine the R factor more precisely by multiplying the energy of each rainfall event by this corrected $I60_{max}$.
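
The event erosivity computation ($R_{event} = E \times I30_{max}$) can be sketched as below using one commonly cited unit-energy equation, $e = 0.29\,(1 - 0.72\,e^{-0.05 i})$ MJ ha$^{-1}$ mm$^{-1}$; the exact coefficients and units used in the study may differ, so the code is illustrative only.

```python
# Hedged sketch of EI30 for a single storm from 10-minute rainfall depths.
import math

def unit_energy(intensity_mm_per_h: float) -> float:
    """Rainfall energy per mm of rain (MJ ha^-1 mm^-1) for a given intensity."""
    return 0.29 * (1.0 - 0.72 * math.exp(-0.05 * intensity_mm_per_h))

def event_erosivity(depths_mm: list[float], interval_min: int = 10) -> float:
    """EI30 for one storm from fixed-interval rainfall depths (interval_min must divide 30)."""
    hours = interval_min / 60.0
    # Event energy: sum of (unit energy * depth) over all intervals.
    energy = sum(unit_energy(d / hours) * d for d in depths_mm)
    # Maximum 30-minute intensity: largest depth over any 30-minute run, converted to mm/h.
    steps = 30 // interval_min
    i30_max = max(sum(depths_mm[i:i + steps]) for i in range(len(depths_mm))) * 2.0
    return energy * i30_max  # MJ mm ha^-1 h^-1

# Example: a short storm recorded at 10-minute intervals (depths in mm).
print(event_erosivity([2.0, 5.5, 8.0, 3.0, 1.0]))
```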

An Interpretable Log Anomaly System Using Bayesian Probability and Closed Sequence Pattern Mining (베이지안 확률 및 폐쇄 순차패턴 마이닝 방식을 이용한 설명가능한 로그 이상탐지 시스템)

  • Yun, Jiyoung;Shin, Gun-Yoon;Kim, Dong-Wook;Kim, Sang-Soo;Han, Myung-Mook
    • Journal of Internet Computing and Services
    • /
    • v.22 no.2
    • /
    • pp.77-87
    • /
    • 2021
  • With the development of the Internet and personal computers, various complex attacks have begun to emerge. As attacks become more complex, signature-based detection becomes difficult, which has led to research on behavior-based log anomaly detection. Recent work utilizes deep learning to learn the order of log events and shows good performance. Despite its good performance, it does not provide any explanation for its predictions. The lack of explanation makes it difficult to find data contamination or vulnerabilities of the model itself, and as a result users lose trust in the model. To address this problem, this work proposes an explainable log anomaly detection system. In this study, log parsing is performed first. Afterward, sequential rules are extracted using Bayesian posterior probability. As a result, a rule set of the form "if condition then result, with posterior probability" is extracted. If a sample matches the rule set, it is normal; otherwise, it is an anomaly. We use the HDFS dataset for the experiments, achieving an F1-score of 92.7% on the test dataset.
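
The rule-matching step can be sketched as follows: sequential rules of the form "if antecedent then consequent, with posterior probability" are checked against a parsed log sequence, and a sequence that violates a high-confidence rule is flagged as an anomaly. The rules, threshold, and log template IDs below are invented for illustration; they are not the paper's actual rule set.

```python
# Simplified rule-matching sketch for explainable log anomaly detection.
from dataclasses import dataclass

@dataclass
class Rule:
    antecedent: tuple[str, ...]   # sequence of log template IDs observed so far
    consequent: str               # template ID expected to follow
    posterior: float              # P(consequent | antecedent) estimated from training data

RULES = [
    Rule(("E1", "E2"), "E3", 0.97),
    Rule(("E3",), "E5", 0.91),
]

def contains_subsequence(sequence: list[str], sub: tuple[str, ...]) -> bool:
    it = iter(sequence)
    return all(any(tok == want for tok in it) for want in sub)

def is_anomalous(sequence: list[str], min_posterior: float = 0.9) -> bool:
    # A sequence is normal if every applicable high-confidence rule is satisfied.
    for rule in RULES:
        if rule.posterior >= min_posterior and contains_subsequence(sequence, rule.antecedent):
            if not contains_subsequence(sequence, rule.antecedent + (rule.consequent,)):
                return True  # expected event never follows: flag as anomaly
    return False

print(is_anomalous(["E1", "E2", "E3", "E5"]))  # False: all rules satisfied
print(is_anomalous(["E1", "E2", "E4"]))        # True: E3 expected after E1, E2
```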

Cellular Automata Simulation System for Emergency Response to the Dispersion of Accidental Chemical Releases (사고로 인한 유해화학물질 누출확산의 대응을 위한 Cellular Automata기반의 시뮬레이션 시스템)

  • Shin, Insup Paul;Kim, Chang Won;Kwak, Dongho;Yoon, En Sup;Kim, Tae-Ok
    • Journal of the Korean Institute of Gas
    • /
    • v.22 no.6
    • /
    • pp.136-143
    • /
    • 2018
  • Cellular automata have been applied to simulations in many fields such as astrophysics, social phenomena, fire spread, and evacuation. Using cellular automata, this study develops a model for consequence analysis of the dispersion of hazardous chemicals, which is required for risk assessment of and emergency response to frequent chemical accidents. Unlike detailed plant safety design, real-time accident response requires fast and iterative calculations to reduce the uncertainty of the distribution of damage within the affected area. EPA ALOHA and KORA of the National Institute of Chemical Safety have been popular choices for these analyses. However, this study proposes an initiative to supplement the model and code continuously, and it differs in developing free software specialized for small and medium enterprises. Compared with full-scale computational fluid dynamics (CFD), which requires large amounts of computation time, some accuracy is traded away while convenience for the general user is improved. By using Python open-source libraries as well as linkage to meteorological information, the functions can be expanded and updated continuously. Users can easily obtain results by simply inputting the layout of the plant and the materials used. Accuracy is verified against full-scale CFD simulations, and the tool will be distributed as open-source software, supporting GPU-accelerated computing for fast computation.
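
A toy cellular automaton for two-dimensional dispersion on a grid, in the spirit of the system described above; the update rule (simple diffusion plus wind-driven advection), the periodic boundary handling, and all parameters are illustrative assumptions, not the paper's model.

```python
# Toy 2-D cellular automaton for dispersion (illustrative sketch only).
import numpy as np

def step(conc: np.ndarray, diffusion: float = 0.2, wind: tuple[int, int] = (0, 1)) -> np.ndarray:
    """One CA update: each cell exchanges mass with its 4 neighbours (diffusion),
    then the whole field is shifted one cell along the wind direction (advection)."""
    up    = np.roll(conc,  1, axis=0)
    down  = np.roll(conc, -1, axis=0)
    left  = np.roll(conc,  1, axis=1)
    right = np.roll(conc, -1, axis=1)
    diffused = conc + diffusion * (up + down + left + right - 4 * conc)
    return np.roll(diffused, shift=wind, axis=(0, 1))

# Initial release at the centre of a 50 x 50 grid.
grid = np.zeros((50, 50))
grid[25, 25] = 100.0
for _ in range(20):            # simulate 20 time steps
    grid = step(grid)
print(grid.max(), grid.sum())  # peak concentration and (conserved) total mass
```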