Title/Summary/Keyword: distributed applications

Search results: 1,258

The Integration of Heterogeneous Applications through Plug-and-Play (플러그 앤드 플래이(Plug-and-Play)개념을 이용한 이형 응용 프로그램의 통합 기법)

  • Baek, Sun-Cheol;Choe, Jung-Min;Jang, Myeong-Uk;Park, Sang-Gyu;Min, Byeong-Ik;Im, Yeong-Hwan
    • The Transactions of the Korea Information Processing Society / v.2 no.6 / pp.947-959 / 1995
  • In this paper, we discuss an effort to develop a multi-agent architecture through which heterogeneous applications communicate and cooperate by means of a plug-and-play mechanism. Three components are created to realize the plug-and-play mechanism: meta-information, the PnP agent module, and the ICM. The meta-information is used to automatically set up a suitable configuration for a newly plugged application, eliminating the need for direct addressing among heterogeneous applications. The PnP agent module is a homogeneous controller that operates on an application to ensure that its activities are coordinated with those of the others in the community, and it provides a homogeneous communication envelope for all heterogeneous applications. The combination of these three components implements the plug-and-play mechanism. In this distributed, open architecture, one should be able to simply plug in a new application and have it work.
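As a rough illustration of the architecture this abstract describes, the sketch below shows how a meta-information registry can remove direct addressing: each application is wrapped in a uniform PnP agent and located by capability. All names (`MetaInfoRegistry`, `PnPAgent`) are hypothetical stand-ins, not the paper's API, and the ICM is omitted.

```python
# Hypothetical sketch of the plug-and-play idea: a registry of
# meta-information lets applications find each other by capability
# instead of by direct address. Names are illustrative only.

class MetaInfoRegistry:
    """Holds meta-information about every plugged-in application."""
    def __init__(self):
        self._agents = {}  # capability -> agent

    def plug_in(self, agent):
        # A newly plugged application is configured automatically
        # from its advertised capabilities.
        for capability in agent.capabilities:
            self._agents[capability] = agent

    def route(self, capability, message):
        # No direct addressing: the registry resolves the capability.
        return self._agents[capability].handle(message)

class PnPAgent:
    """Homogeneous controller wrapping one heterogeneous application."""
    def __init__(self, name, capabilities, handler):
        self.name = name
        self.capabilities = capabilities
        self._handler = handler

    def handle(self, message):
        # Uniform communication envelope around the native application.
        return {"from": self.name, "result": self._handler(message)}

registry = MetaInfoRegistry()
registry.plug_in(PnPAgent("translator", ["translate"], str.upper))
print(registry.route("translate", "hello"))
# {'from': 'translator', 'result': 'HELLO'}
```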


Meta-server Model for Middleware Supporting for Context Awareness (상황인식을 지원하는 미들웨어를 위한 메타서버 모델)

  • Lee, Seo-Jeong;Hwang, Byung-Yeon;Yoon, Yong-Ik
    • Journal of Korea Spatial Information System Society / v.6 no.2 s.12 / pp.39-49 / 2004
  • An increasing number of distributed applications will be built with mobile technology. These applications face temporary loss of network connectivity when they move, need to discover other hosts in an ad-hoc manner, and are likely to have scarce resources, including CPU speed, memory, and battery power. Software engineers building mobile applications need middleware that resolves these problems and offers appropriate support for developing mobile applications. In this paper, we describe the construction of a meta-server for middleware that addresses reflective context awareness, and we demonstrate its usability. The metadata consists of user configuration, device configuration, user context, device context, and dynamic image metadata. When the middleware sends a save or retrieval request to the meta-server, the meta-server verifies the request and then returns response messages to the middleware. The meta-server has been applied to multimedia stream services with context awareness.
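A minimal sketch of the request flow just described, assuming the five metadata categories named in the abstract and a verify-then-respond protocol; the class and field names are illustrative, not the paper's.

```python
from dataclasses import dataclass, field

# Hypothetical model of the meta-server's metadata categories and its
# verify-then-respond handling of save/retrieval requests.

@dataclass
class ContextMetadata:
    user_configuration: dict = field(default_factory=dict)
    device_configuration: dict = field(default_factory=dict)
    user_context: dict = field(default_factory=dict)
    device_context: dict = field(default_factory=dict)
    dynamic_image_metadata: dict = field(default_factory=dict)

class MetaServer:
    def __init__(self):
        self._store = {}  # key -> ContextMetadata

    def handle(self, request):
        # Verify the request before touching the store.
        if request.get("op") not in ("save", "retrieve") or "key" not in request:
            return {"status": "rejected"}
        if request["op"] == "save":
            self._store[request["key"]] = request["metadata"]
            return {"status": "saved"}
        return {"status": "ok", "metadata": self._store.get(request["key"])}

server = MetaServer()
server.handle({"op": "save", "key": "cam1", "metadata": ContextMetadata()})
print(server.handle({"op": "retrieve", "key": "cam1"})["status"])  # ok
```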


HTCaaS (High Throughput Computing as a Service) in Supercomputing Environment (슈퍼컴퓨팅환경에서의 대규모 계산 작업 처리 기술 연구)

  • Kim, Seok-Kyoo;Kim, Jik-Soo;Kim, Sangwan;Rho, Seungwoo;Kim, Seoyoung;Hwang, Soonwook
    • The Journal of the Korea Contents Association / v.14 no.5 / pp.8-17 / 2014
  • Petascale systems (so-called supercomputers) have mainly been used to support communication-intensive, tightly-coupled parallel computations based on message-passing interfaces such as MPI (HPC: High-Performance Computing). In contrast, computing paradigms such as High-Throughput Computing (HTC) mainly target compute-intensive applications (with relatively low I/O requirements) consisting of many loosely-coupled tasks that need no communication among them. In Korea, recently emerging applications from scientific fields such as pharmaceuticals, high-energy physics, and nuclear physics require an amount of computing power that cannot be supplied by a single type of computing resource. In this paper, we present HTCaaS (High-Throughput Computing as a Service), which can leverage national distributed computing resources in Korea to support these challenging HTC applications, and we describe our system architecture, job execution scenario, and case studies of various scientific applications.
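The HTC workload shape described here, many independent compute-bound tasks with no inter-task communication, can be illustrated with a local process pool standing in for HTCaaS's federated national resources; the task function below is a made-up placeholder.

```python
from concurrent.futures import ProcessPoolExecutor

def score_candidate(seed: int) -> float:
    # Placeholder for one compute-intensive task (e.g., evaluating one
    # drug candidate); tasks never communicate with each other.
    x = float(seed)
    for _ in range(10_000):
        x = (x * x + 1.0) % 97.0
    return x

if __name__ == "__main__":
    # A local pool stands in for HTCaaS's federated resources; each
    # task is dispatched independently, so completion order is
    # irrelevant and the workload scales out trivially.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(score_candidate, range(100)))
    print(f"processed {len(results)} independent tasks")
```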

Design of a Large-scale Task Dispatching & Processing System based on Hadoop (하둡 기반 대규모 작업 배치 및 처리 기술 설계)

  • Kim, Jik-Soo;Cao, Nguyen;Kim, Seoyoung;Hwang, Soonwook
    • Journal of KIISE / v.43 no.6 / pp.613-620 / 2016
  • This paper presents MOHA (Many-Task Computing on Hadoop), a framework that aims to effectively bring Many-Task Computing (MTC) technologies, originally developed for high-performance processing of many tasks, to the existing Big Data processing platform Hadoop. We present the basic concepts, motivation, preliminary proof-of-concept results based on a distributed message queue, and future research directions of MOHA. MTC applications may have relatively low I/O requirements per task, but a very large number of tasks must be processed efficiently, with potentially heavy file-based inter-task communication. MTC applications can therefore exhibit a different data-intensive workload pattern than existing Hadoop applications, which are typically based on relatively large data block sizes. Through an effective convergence of MTC and Big Data technologies, MOHA can support large-scale scientific applications alongside the Hadoop ecosystem, which is evolving into a multi-application platform.
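A toy version of the pull-based dispatch pattern the proof-of-concept describes: many fine-grained tasks are pushed through a message queue and pulled by workers. An in-process thread-safe queue stands in for MOHA's distributed message queue; all names are illustrative.

```python
import queue
import threading

task_queue: "queue.Queue" = queue.Queue()  # stand-in for a distributed queue
results = []
lock = threading.Lock()

def worker():
    # Workers pull tasks as fast as they can finish them, which suits
    # many small tasks with low per-task I/O.
    while True:
        task = task_queue.get()
        if task is None:              # poison pill: no more tasks
            task_queue.task_done()
            return
        with lock:
            results.append(task * task)  # stand-in for one MTC task
        task_queue.task_done()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for i in range(100):                  # enqueue fine-grained tasks
    task_queue.put(i)
for _ in threads:                     # one poison pill per worker
    task_queue.put(None)
task_queue.join()
print(len(results))                   # 100
```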

Framework for Supporting Business Services based on the EPC Network (EPC Network 기반의 비즈니스 서비스 지원을 위한 프레임워크)

  • Nam, Tae-Woo;Yeom, Keun-Hyuk
    • The KIPS Transactions: Part D / v.17D no.3 / pp.193-202 / 2010
  • Recently, there has been considerable research on automatic object identification and distributed computing technology to realize a ubiquitous computing environment. Radio Frequency IDentification (RFID) technology has been applied to many business areas to simplify complex processes and gain important benefits. To derive real benefits from RFID, a system must rapidly implement functions to process the large quantity of event data generated by RFID operations, and it should be dynamically configurable for changing businesses. Consequently, developers are forced to implement systems that derive meaningful high-level events from simple RFID events and bind them to various business processes. Although applications could directly consume and act on RFID events, extracting the business rules from the business logic leads to better decoupling of the system, which in turn increases maintainability. In this paper, we describe an RFID business-aware framework for business processes in the Electronic Product Code (EPC) Network. The framework is proposed for developing business applications using business services, where "business services" refers to generated events that can be used in business applications without additional data collection and processing. The framework provides business rules related to data collection, processing, and management, and it supports the rapid development and easy maintenance of business applications based on business services.
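The decoupling argument can be made concrete with a small sketch: business rules live in a data structure outside the application logic and map raw EPC reads to high-level business events. The rule shapes and event names below are invented for illustration, not the paper's API.

```python
# Business rules kept as data, separate from application logic, so
# they can change without redeploying the applications that consume
# the resulting high-level events.

RULES = [
    # (predicate over a raw RFID read event, high-level business event)
    (lambda e: e["reader"] == "dock-door-1" and e["direction"] == "in",
     "goods_received"),
    (lambda e: e["reader"] == "dock-door-2" and e["direction"] == "out",
     "goods_shipped"),
]

def to_business_events(raw_events):
    # Derive meaningful business events from simple RFID reads.
    for event in raw_events:
        for predicate, business_event in RULES:
            if predicate(event):
                yield {"event": business_event, "epc": event["epc"]}

reads = [{"epc": "urn:epc:id:sgtin:1.2.3", "reader": "dock-door-1",
          "direction": "in"}]
print(list(to_business_events(reads)))
# [{'event': 'goods_received', 'epc': 'urn:epc:id:sgtin:1.2.3'}]
```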

LDBAS: Location-aware Data Block Allocation Strategy for HDFS-based Applications in the Cloud

  • Xu, Hua;Liu, Weiqing;Shu, Guansheng;Li, Jing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.1 / pp.204-226 / 2018
  • Big data processing applications have gradually been migrated into the cloud, owing to the advantages of cloud computing. The Hadoop Distributed File System (HDFS) is one of the fundamental support systems for big data processing on MapReduce-like frameworks such as Hadoop and Spark. Since HDFS is not aware of the co-location of virtual machines in the cloud, its default block allocation scheme does not fit cloud environments well, in two respects: data reliability loss and performance degradation. In this paper, we present a novel location-aware data block allocation strategy (LDBAS). LDBAS jointly optimizes data reliability and performance for upper-layer applications by allocating data blocks according to the locations and differing processing capacities of virtual nodes in the cloud. We apply LDBAS to two stages of data allocation in HDFS in the cloud (initial data allocation and data recovery) and design the corresponding algorithms. Finally, we implement LDBAS in an actual Hadoop cluster and evaluate its performance with the benchmark suite BigDataBench. The experimental results show that LDBAS can guarantee the designed data reliability while reducing the job execution time of I/O-intensive applications in Hadoop by 8.9% on average, and by up to 11.2%, compared with the original Hadoop in the cloud.
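A simplified sketch of the placement idea, not the paper's actual algorithm: choose replica targets greedily so that no two replicas share a physical host (reliability) while preferring higher-capacity virtual nodes (performance). The node model and scoring are assumptions.

```python
# Location-aware replica placement in miniature: the allocator knows
# which physical host backs each virtual node, unlike stock HDFS.

nodes = [
    {"name": "vm1", "host": "phys-A", "capacity": 1.0},
    {"name": "vm2", "host": "phys-A", "capacity": 0.8},
    {"name": "vm3", "host": "phys-B", "capacity": 0.6},
    {"name": "vm4", "host": "phys-C", "capacity": 0.9},
]

def place_replicas(nodes, replication=3):
    chosen, used_hosts = [], set()
    # Greedy: highest capacity first, never reuse a physical host,
    # so one host failure can cost at most one replica.
    for node in sorted(nodes, key=lambda n: n["capacity"], reverse=True):
        if node["host"] not in used_hosts:
            chosen.append(node["name"])
            used_hosts.add(node["host"])
        if len(chosen) == replication:
            break
    return chosen

print(place_replicas(nodes))  # ['vm1', 'vm4', 'vm3']
```

The same host-aware filter would apply at both stages the paper names, initial allocation and recovery, since re-replication after a failure faces the identical co-location risk.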

The Study of Web Services Adaptation in Ubiquitous Environments (유비쿼터스 환경에서의 웹서비스 적용 기술)

  • Lee, Won-Suk;Lee, Kang-Chan;Jeon, Jong-Hong;Lee, Seung-Yun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.2 / pp.1031-1034 / 2005
  • Web Services was initially recognized as an efficient technology for application integration, and it was used to integrate distributed enterprise applications or applications between business partners. Recently, however, the use of Web Services has spread beyond Internet applications to wireless network applications. The main reasons are that Web Services is a W3C international standard and that it is based on XML, which is platform-neutral. Currently, major Web Services companies such as MS and IBM focus on the inte… In this paper, we define and explain technical issues in adapting Web Services to ubiquitous environments.


Workflow-based Bio Data Analysis System for HPC (HPC 환경을 위한 워크플로우 기반의 바이오 데이터 분석 시스템)

  • Ahn, Shinyoung;Kim, ByoungSeob;Choi, Hyun-Hwa;Jeon, Seunghyub;Bae, Seungjo;Choi, Wan
    • KIPS Transactions on Software and Data Engineering / v.2 no.2 / pp.97-106 / 2013
  • Since the Human Genome Project finished, the cost of human genome analysis has decreased very rapidly, resulting in a sharp increase in the amount of human genome data to be analyzed. As the need for fast analysis of very large bio data such as human genomes grows, non-IT researchers such as biologists should be able to execute many kinds of bio applications, each with different characteristics, quickly and effectively in an HPC environment. To accomplish this, a biologist needs an easy way to define a sequence of bio applications as a workflow, because bio applications generally have to be combined and executed in a particular order. Such a bio workflow should be executed in a distributed and parallel fashion, with computing resources allocated efficiently on an HPC cluster system. In this way, we can expect better performance and faster response times for very large bio data analysis. This paper proposes a workflow-based data analysis system specialized for bio applications. Using this system, non-IT scientists and researchers can easily analyze very large bio data in an HPC environment.
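The workflow idea reduces to running a fixed sequence of stages, fanning each stage out over independent inputs. The sketch below uses a local process pool in place of an HPC scheduler, with made-up stage names borrowed from genome analysis.

```python
from concurrent.futures import ProcessPoolExecutor

def align(read: str) -> str:
    return f"aligned({read})"          # stand-in for a real aligner

def call_variants(alignment: str) -> str:
    return f"variants({alignment})"    # stand-in for a variant caller

WORKFLOW = [align, call_variants]      # stages must run in this order

def run_workflow(inputs):
    data = inputs
    with ProcessPoolExecutor() as pool:
        for stage in WORKFLOW:
            # Items within a stage are independent, so each stage fans
            # out in parallel before the next stage starts.
            data = list(pool.map(stage, data))
    return data

if __name__ == "__main__":
    print(run_workflow(["read1", "read2"]))
    # ['variants(aligned(read1))', 'variants(aligned(read2))']
```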

Design and implementation of a Shared-Concurrent File System in distributed UNIX environment (분산 UNIX 환경에서 Shared-Concurrent File System의 설계 및 구현)

  • Jang, Si-Ung;Jeong, Gi-Dong
    • The Transactions of the Korea Information Processing Society / v.3 no.3 / pp.617-630 / 1996
  • In this paper, a shared-concurrent file system (S-CFS) is designed and implemented using conventional disks as disk arrays on a workstation cluster, which can be used as a small-scale server. Since it is implemented on UNIX operating systems, S-CFS is not only portable and flexible but also efficient in resource usage, because it does not require additional I/O nodes. The results show that on small-scale systems with enough disks, the performance of the concurrent file system on transaction-processing applications is bounded by the CPUs' computing power, while its performance on massive data I/O is bounded by the time required to copy data between buffers. The concurrent file system, implemented on a workstation cluster with 8 disks, shows a throughput of 388 tps for transaction-processing applications and provides a bandwidth of 15.8 MB/s for massive data processing applications. Moreover, the file system is designed to enhance the throughput of applications requiring high-performance I/O by letting users control its degree of parallelism.
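The striping idea behind such a software disk array can be shown in miniature: a logical byte stream is split into fixed-size stripe units placed round-robin across disks and reassembled on read. This toy omits everything that makes S-CFS real (UNIX integration, caching, concurrency control).

```python
# Round-robin striping across a software disk array, in miniature.

STRIPE_SIZE = 4  # bytes per stripe unit (unrealistically small, for demo)

def stripe(data: bytes, n_disks: int):
    # Spread consecutive stripe units across disks round-robin, so a
    # large sequential access touches all disks in parallel.
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), STRIPE_SIZE):
        disks[(i // STRIPE_SIZE) % n_disks] += data[i:i + STRIPE_SIZE]
    return disks

def unstripe(disks, total_len: int):
    # Reassemble the logical stream by visiting disks in the same
    # round-robin order used when writing.
    out = bytearray()
    offsets = [0] * len(disks)
    unit = 0
    while len(out) < total_len:
        d = unit % len(disks)
        out += disks[d][offsets[d]:offsets[d] + STRIPE_SIZE]
        offsets[d] += STRIPE_SIZE
        unit += 1
    return bytes(out)

data = b"The quick brown fox jumps over the lazy dog"
disks = stripe(data, 8)
assert unstripe(disks, len(data)) == data
print("round-trip OK across", len(disks), "disks")
```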


Android-Based Open Platform Intelligent Vehicle Services Middleware Application (안드로이드 기반의 지능형자동차 미들웨어 오픈플랫폼 서비스 응용)

  • Choi, Byung-Kwan
    • Journal of the Korea Society of Computer and Information / v.18 no.8 / pp.33-41 / 2013
  • With the convergence of intelligent vehicle technology and IT, much research and development is under way on new imaging and media applications built on the open-source Android platform, on smartphone application technology, and on intelligent vehicles as a new paradigm. Android-based intelligent vehicle applications have evolved beyond the limits of individual multimedia platforms into integrated sets of multimedia technologies, and services and applications developed in such distributed environments increasingly require mobile terminal device technology to deliver a variety of services. In this paper, we design an Android-based open-platform middleware dedicated to intelligent vehicles, covering the SVC codec, real-time video and graphics processing, and SoC design under a monolithic system specification. The newly designed SoC-based terminal service functions and standardized interface analysis techniques were verified through experiments.