• Title/Summary/Keyword: one-to-one computing

2,201 search results

Traditional Culture's Cloud Digital Archive by approach to the Convergence (융복합 접근을 통한 고전문학 클라우드 디지털 아카이브)

  • Kim, Dong-Gun;Jeong, Hwa-Young
    • Journal of Advanced Navigation Technology / v.16 no.1 / pp.116-121 / 2012
  • This research aimed at an application method that connects traditional culture with information technology, one of the digital convergence areas. For this purpose, we proposed a cloud digital archive framework for traditional culture. The cloud computing environment is one of the new technology trends and is an important keyword for information technology. This research focuses on a functional framework able to support services in a cloud computing environment connected to a pansori archive model, an element of traditional culture. We also designed a data structure and metadata for each characteristic, and added a service framework to support the pansori archive. The structure consists of data evaluation, preservation, and access processes, and the archive is related to the digital archive and its metadata through a relation model.
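
The abstract above sketches an archive process (data evaluation, preservation, access) built on records with structured metadata. Below is a minimal, hypothetical Python sketch of what one such record and the evaluation step might look like; the class name, field names, and required-metadata set are illustrative assumptions, not the authors' actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ArchiveRecord:
    """Illustrative pansori archive item; fields are assumptions, not the paper's schema."""
    item_id: str
    title: str
    performer: str                                  # singer of the pansori piece
    recorded_on: date
    media_uri: str                                  # digitized object in cloud storage
    metadata: dict = field(default_factory=dict)    # descriptive/technical metadata

def evaluate(record: ArchiveRecord) -> bool:
    """Data evaluation step: accept only records carrying the minimum required metadata."""
    required = {"format", "rights", "language"}
    return required.issubset(record.metadata)

# Preservation and access would follow evaluation in the proposed process.
item = ArchiveRecord("p-001", "Chunhyangga (excerpt)", "unknown", date(2012, 1, 1),
                     "s3://archive/pansori/p-001.wav",
                     {"format": "audio/wav", "rights": "CC BY", "language": "ko"})
print(evaluate(item))  # True
```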

Fuzzy Inference of Large Volumes in Parallel Computing Environment (병렬컴퓨팅 환경에서의 대용량 퍼지 추론)

  • 김진일;박찬량;이동철;이상구
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2000.05a / pp.13-16 / 2000
  • In fuzzy expert systems or database systems that hold huge volumes of fuzzy data or large fuzzy rule bases, inference time increases sharply, so a high-performance parallel fuzzy computing environment is needed. In this paper, we propose a parallel fuzzy inference mechanism for a parallel computing environment in which fuzzy rules are distributed and executed simultaneously. The ONE_TO_ALL algorithm is used to broadcast the fuzzy input vector to all nodes, and the results of the MIN/MAX operations are transferred to the output processor by the ALL_TO_ONE algorithm. By processing fuzzy rules and data in parallel, the inference algorithm extracts effective parallelism and achieves a good speedup.
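
A minimal sketch of the rule-parallel MIN/MAX inference described above, using Python's multiprocessing to stand in for the ONE_TO_ALL broadcast and ALL_TO_ONE gather steps; the triangular membership functions and the random rule base are simplifying assumptions, not the paper's implementation.

```python
import numpy as np
from multiprocessing import Pool

def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    return max(0.0, min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)))

def fire_rules(args):
    """Worker node: MIN operation over the antecedents of the rules assigned to it."""
    rules, x = args                              # x is the broadcast fuzzy input vector
    return [min(tri(xi, *mf) for xi, mf in zip(x, rule)) for rule in rules]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 10,000 toy rules, 4 inputs each; every antecedent is a random triangle on [0, 1]
    rules = [[tuple(sorted(rng.uniform(0, 1, 3))) for _ in range(4)] for _ in range(10_000)]
    x = [0.2, 0.5, 0.7, 0.9]                     # fuzzy input vector (ONE_TO_ALL)

    n_nodes = 4
    chunks = [rules[i::n_nodes] for i in range(n_nodes)]
    with Pool(n_nodes) as pool:
        partial = pool.map(fire_rules, [(chunk, x) for chunk in chunks])

    # ALL_TO_ONE: the output processor combines the partial results with MAX
    print("max firing strength:", max(max(p) for p in partial))
```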


Task Scheduling on Cloudlet in Mobile Cloud Computing with Load Balancing

  • Poonam;Suman Sangwan
    • International Journal of Computer Science & Network Security / v.23 no.10 / pp.73-80 / 2023
  • The recent growth in the use of mobile devices has increased computing and storage requirements. Cloud computing has been used over the past decade to meet computational and storage needs over the internet. However, mobile applications such as Augmented Reality (AR), M2M communications, V2X communications, and the Internet of Things (IoT) led to the emergence of mobile cloud computing (MCC). Data from mobile devices is offloaded and computed on the cloud, removing the limitations associated with mobile devices. However, the delays induced by the location of data centers led to the birth of edge computing technologies. In this paper, we discuss one of these edge computing technologies, the cloudlet. A cloudlet brings the cloud close to the end user, reducing delay and response time. An algorithm is proposed for scheduling tasks on a cloudlet that considers the load on each VM. Simulation results indicate that the proposed algorithm provides 12% and 29% improvements over EMACS and QRR, respectively, while balancing the load.
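
The abstract does not spell out the scheduling algorithm itself, so the following is only a hedged baseline sketch of load-aware task placement on cloudlet VMs (greedy "least-loaded VM first"), not the scheme compared against EMACS and QRR in the paper.

```python
import heapq

def schedule(tasks, n_vms):
    """Place each task on the currently least-loaded VM (longest tasks first).
    Illustrative baseline only; task lengths and loads are in arbitrary units."""
    vms = [(0.0, i) for i in range(n_vms)]      # heap of (current load, VM id)
    heapq.heapify(vms)
    assignment = {}
    for task_id, length in sorted(tasks.items(), key=lambda t: -t[1]):
        load, vm = heapq.heappop(vms)
        assignment[task_id] = vm
        heapq.heappush(vms, (load + length, vm))
    makespan = max(load for load, _ in vms)
    return assignment, makespan

tasks = {"t1": 40, "t2": 25, "t3": 25, "t4": 10, "t5": 60}
print(schedule(tasks, n_vms=2))   # prints the assignment and the resulting makespan
```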

Source to terminal reliability evaluation by network decomposition (분할에 의한 네트워크의 국간신뢰도 계산)

  • 서희종;최종수
    • The Journal of Korean Institute of Communications and Information Sciences / v.21 no.2 / pp.375-382 / 1996
  • In this paper, an effective method for computing the source-to-terminal reliability of a network by decomposition is described. A graph is modeled after the network and decomposed into two subgraphs. The logic product terms of one subgraph are computed, a graph of the other subgraph is constructed according to the event represented by each logic product term, and its logic product terms are computed in turn. The source-to-terminal reliability is then obtained by multiplying the logic product terms of one subgraph by those of the other. The time complexity of computing all the logic product terms of the first subgraph is two raised to the power of the number of edges in that subgraph, and that of the other subgraph is its number of edges multiplied by the number of logic product terms. This method requires less computation time than computing the reliability without decomposition.
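
As a point of reference for the complexity statement above, here is a brute-force computation of source-to-terminal reliability that enumerates all 2^|E| edge states explicitly; it is not the paper's decomposition procedure, but it shows the term count the decomposition is meant to reduce.

```python
from itertools import product

def st_reliability(nodes, edges, probs, s, t):
    """Exact source-to-terminal reliability by enumerating every combination of
    working/failed edges (2 ** len(edges) terms)."""
    reliability = 0.0
    for state in product([True, False], repeat=len(edges)):
        p = 1.0
        for works, q in zip(state, probs):
            p *= q if works else (1.0 - q)
        adj = {n: [] for n in nodes}                 # connectivity over working edges
        for works, (u, v) in zip(state, edges):
            if works:
                adj[u].append(v)
                adj[v].append(u)
        stack, seen = [s], {s}
        while stack:
            for v in adj[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        if t in seen:
            reliability += p
    return reliability

# Bridge network: two two-edge s-t paths plus a cross edge, every edge working with p = 0.9
nodes = ["s", "a", "b", "t"]
edges = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]
print(st_reliability(nodes, edges, [0.9] * 5, "s", "t"))   # about 0.978
```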


A Novel Soft Computing Technique for the Shortcoming of the Polynomial Neural Network

  • Kim, Dongwon;Huh, Sung-Hoe;Seo, Sam-Jun;Park, Gwi-Tae
    • International Journal of Control, Automation, and Systems / v.2 no.2 / pp.189-200 / 2004
  • In this paper, we introduce a new soft computing technique that combines the fuzzy rules of a fuzzy system with a polynomial neural network (PNN). The PNN is a flexible neural architecture whose structure is developed through the modeling process. Unfortunately, the PNN has a serious drawback: it cannot be constructed for nonlinear systems that have only a small number of input variables. To overcome this limitation of the conventional PNN, we employed a fuzzy system, one of the three principal soft computing components. The space of input variables is partitioned into several subspaces by the fuzzy system, and these subspaces are utilized as new input variables for the PNN architecture. The proposed technique merges the fuzzy system and the PNN into one unified framework, yielding a workable synergistic environment in which the main characteristics of the two modeling techniques are harmonized. Thus, the proposed method alleviates the problems of the PNN while providing excellent performance. Identification results for a three-input nonlinear static function and a nonlinear system with two inputs are presented to demonstrate the performance of the proposed approach.
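
A minimal sketch of the partitioning idea: fuzzy membership functions split the input space into subspaces, and the membership degrees become new input variables for the polynomial stage. The Gaussian membership shapes and the plain least-squares polynomial fit below are stand-ins, not the authors' PNN.

```python
import numpy as np

def gauss(x, center, width):
    """Degree of membership of x in the fuzzy subspace around `center`."""
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

# One scalar input partitioned into three fuzzy subspaces: low / medium / high
centers, width = [0.0, 0.5, 1.0], 0.2
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x)                          # toy nonlinear target

# Membership degrees are handed to the polynomial stage as additional inputs
memberships = np.stack([gauss(x, c, width) for c in centers], axis=1)
features = np.column_stack([x, x ** 2, memberships, np.ones_like(x)])

coef, *_ = np.linalg.lstsq(features, y, rcond=None)   # stand-in for the PNN stage
print("fit RMSE:", np.sqrt(np.mean((features @ coef - y) ** 2)))
```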

Re-Ordering of Users in the Group Key Generation Tree Protocol (사용자 순서 재조정을 통한 그룹 키 생성 트리 프로토콜)

  • Hong, Sung-Hyuck
    • Journal of Digital Convergence / v.10 no.6 / pp.247-251 / 2012
  • Tree-based Group Diffie-Hellman (TGDH) is one of the efficient group key agreement protocols for generating a group key (GK). TGDH assumes that all members have equal computing power, but since heterogeneity is a characteristic of distributed computing, a member may be on a workstation, a laptop, or even a mobile device. The sequence of group members should therefore be reordered according to each member's computing power to improve performance. This research proposes reordering the members of the group key generation tree to enhance the efficiency of group key generation.
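
A minimal sketch of the reordering idea only: members with less computing power are merged later in a Huffman-style construction, so they end up at shallower leaves of the key tree and perform fewer expensive operations. The cost model (weight = 1 / computing power) is an illustrative assumption, not the paper's exact TGDH procedure.

```python
import heapq
from itertools import count

def key_tree_order(members):
    """Build a binary key-tree layout in which low-power members sit near the root.
    `members` maps member name -> relative computing power."""
    tie = count()     # tie-breaker so the heap never compares names or nested tuples
    heap = [(1.0 / power, next(tie), name) for name, power in members.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(tie), (left, right)))   # internal node
    return heap[0][2]                          # nested tuples describe the tree shape

members = {"workstation": 8.0, "laptop": 4.0, "phone": 1.0}
print(key_tree_order(members))                 # (('workstation', 'laptop'), 'phone')
```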

Analysis of Worldwide Review on Cloud Computing (클라우드 컴퓨팅에 대한 국내외 동향 분석)

  • Leem, Young-Moon;Hwang, Young-Seob
    • Proceedings of the Safety Management and Science Conference / 2009.11a / pp.411-415 / 2009
  • There are several definitions of cloud computing, but it can be described as the use of personal computers, servers, and software in one cluster to share computing resources. The applications of cloud computing are increasing rapidly because of its economic merits, convenient connectivity, storage capacity, and so on. The main objective of this paper is to find an effective methodology, as an initial stage, for applying cloud computing in real life. To that end, this paper reviews worldwide trends in cloud computing.


Design of Distributed Processing Framework Based on H-RTGL One-class Classifier for Big Data (빅데이터를 위한 H-RTGL 기반 단일 분류기 분산 처리 프레임워크 설계)

  • Kim, Do Gyun;Choi, Jin Young
    • Journal of Korean Society for Quality Management / v.48 no.4 / pp.553-566 / 2020
  • Purpose: The purpose of this study was to design a framework for generating a one-class classification algorithm based on hyper-rectangles (H-RTGL) in a distributed environment connected by a network. Methods: First, we devised an H-RTGL-based one-class classifier that can be executed by distributed computing nodes with both model and data parallelism. We then designed supporting components for executing the distributed processing. Finally, we validated both the effectiveness and the efficiency of the classifier obtained from the proposed framework through a numerical experiment using a dataset from the UCI machine learning repository. Results: We designed a distributed processing framework for H-RTGL-based one-class classification in an environment of physically separated computing nodes. It includes components implementing model and data parallelism, which enable distributed generation of the classifier. In the numerical experiment we observed no significant change in classification performance, as assessed by a statistical test, while elapsed time on a dataset of considerable size was reduced by the distributed processing. Conclusion: Based on these results, we conclude that applying distributed processing to classifier generation can preserve classification performance while improving the efficiency of the algorithm. We also discuss the limitations of this work and directions for future research.
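
A minimal, hedged sketch of the data-parallel part of the idea: each worker fits an axis-aligned hyper-rectangle (interval hull) on its partition of the normal-class data, and a point is accepted if any rectangle contains it. The fitting rule and the "accept if inside any box" decision are simplifications, not the authors' H-RTGL algorithm.

```python
import numpy as np
from multiprocessing import Pool

def fit_box(partition):
    """Fit one axis-aligned hyper-rectangle (interval hull) on a data partition."""
    return partition.min(axis=0), partition.max(axis=0)

def inside(x, box):
    lo, hi = box
    return bool(np.all(x >= lo) and np.all(x <= hi))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    normal = rng.normal(0.0, 1.0, size=(10_000, 3))          # one-class training data
    partitions = np.array_split(normal, 4)                   # data parallelism: 4 workers

    with Pool(4) as pool:
        boxes = pool.map(fit_box, partitions)                 # one rectangle per partition

    # One-class decision: a query point is "normal" if any rectangle contains it
    for query in (np.zeros(3), np.full(3, 9.0)):
        print(query, any(inside(query, b) for b in boxes))    # True, then False
```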

Task Distribution Scheme based on Service Requirements Considering Opportunistic Fog Computing Nodes in Fog Computing Environments (포그 컴퓨팅 환경에서 기회적 포그 컴퓨팅 노드들을 고려한 서비스 요구사항 기반 테스크 분배 방법)

  • Kyung, Yeunwoong
    • Journal of Korea Multimedia Society / v.24 no.1 / pp.51-57 / 2021
  • In this paper, we propose a task distribution scheme for a fog computing environment that takes opportunistic fog computing nodes into account. Because latency is one of the important performance metrics for IoT (Internet of Things) applications, fog computing systems have been studied extensively. However, since load can become concentrated on specific fog computing nodes due to the spatial and temporal characteristics of IoT traffic, load distribution is needed to prevent performance degradation. This paper therefore proposes a task distribution scheme that considers static as well as opportunistic fog computing nodes according to their mobility. In particular, based on the task requirements, the proposed scheme processes delay-sensitive tasks at static fog nodes and distributes delay-insensitive tasks to opportunistic fog nodes. Performance evaluation shows that the proposed scheme achieves lower service response time than conventional schemes.
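
A simplified dispatcher illustrating the routing rule described above (delay-sensitive tasks to static fog nodes, delay-tolerant tasks to opportunistic nodes); the node attributes and the least-loaded tie-breaking are assumptions, not the paper's exact distribution scheme.

```python
from dataclasses import dataclass

@dataclass
class FogNode:
    name: str
    static: bool            # True: fixed fog node; False: opportunistic (mobile) node
    load: float = 0.0       # queued work, arbitrary units

def dispatch(task_size, delay_sensitive, static_nodes, opportunistic_nodes):
    """Send delay-sensitive tasks to static fog nodes and delay-tolerant tasks to
    opportunistic nodes, choosing the least-loaded candidate in the selected pool."""
    pool = static_nodes if delay_sensitive else (opportunistic_nodes or static_nodes)
    node = min(pool, key=lambda n: n.load)
    node.load += task_size
    return node.name

static = [FogNode("static-1", True), FogNode("static-2", True)]
mobile = [FogNode("bus-12", False), FogNode("car-7", False)]
print(dispatch(5.0, True, static, mobile))    # delay-sensitive  -> a static node
print(dispatch(8.0, False, static, mobile))   # delay-tolerant   -> an opportunistic node
```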

A Secure Location-Based Service Reservation Protocol in Pervasive Computing Environment

  • Konidala M. Divyan;Kim, Kwangjo
    • Proceedings of the Korea Institutes of Information Security and Cryptology Conference / 2003.12a / pp.669-685 / 2003
  • Nowadays mobile phones and PDAs are part and parcel of our lives. By carrying a portable mobile device with us at all times, we are already living in a partial Pervasive Computing Environment (PCE) that is waiting to be exploited. One of the advantages of pervasive computing is that it strongly supports the deployment of Location-Based Services (LBSs). In a PCE there will be many competing service providers (SPs) trying to sell different or similar LBSs to users. To reserve a particular service, it is very difficult for a low-computing, resource-poor mobile device to handle many such SPs at a time and to identify, and securely communicate with, only genuine ones. Our paper establishes a convincing trust model through which secure job delegation is accomplished. Secure job delegation and cost-effective cryptographic techniques greatly reduce the burden on the mobile device of communicating securely with trusted SPs. Our protocol also provides users with privacy protection, replay protection, entity authentication, message authentication, integrity, and confidentiality. This paper explains the protocol using one such LBS, a "Secure Automated Taxi Calling Service", as an example.
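
The abstract lists replay protection and message authentication among the protocol's guarantees. The sketch below shows one inexpensive way a resource-poor device could obtain both with a pre-shared key, a nonce, and an HMAC; the message format and key setup are assumptions for illustration and are not the protocol proposed in the paper.

```python
import hashlib, hmac, os, time

def make_request(shared_key: bytes, payload: bytes):
    """Client side: attach a random nonce and a timestamp, authenticate with HMAC-SHA256."""
    nonce = os.urandom(16)
    ts = int(time.time()).to_bytes(8, "big")
    tag = hmac.new(shared_key, nonce + ts + payload, hashlib.sha256).digest()
    return nonce, ts, payload, tag

def verify(shared_key: bytes, nonce, ts, payload, tag, seen_nonces: set, max_age=30):
    """Service-provider side: check the tag, reject stale timestamps and replayed nonces."""
    expected = hmac.new(shared_key, nonce + ts + payload, hashlib.sha256).digest()
    fresh = abs(time.time() - int.from_bytes(ts, "big")) <= max_age
    if hmac.compare_digest(expected, tag) and fresh and nonce not in seen_nonces:
        seen_nonces.add(nonce)
        return True
    return False

key = os.urandom(32)
request = make_request(key, b"reserve taxi at station exit 3")
seen = set()
print(verify(key, *request, seen))   # True: fresh, authentic request
print(verify(key, *request, seen))   # False: replayed nonce is rejected
```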
