• Title/Summary/Keyword: decentralized data processing

Internet of Drone: Identity Management using Hyperledger Fabric Platforms

  • Etienne, Igugu Tshisekedi; Kang, Sung-Won; Rhee, Kyung-hyune
    • Proceedings of the Korea Information Processing Society Conference / 2022.05a / pp.204-207 / 2022
  • The use of drones keeps increasing even though many remain skeptical. In the near future, the volume of data they create and consume will be very large, hence the need for an architecture that provides sound identity management and access control in a decentralized way while guaranteeing security and privacy. In this article, we propose an architecture based on the Hyperledger Fabric blockchain platform that manages identities securely, starting with the registration of drones on the network and followed by access control through a Public Key Infrastructure (PKI) and a Membership Service Provider (MSP) to enable decision-making within the system.
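
A minimal sketch of the registration-then-verification flow this abstract describes, assuming Ed25519 signatures stand in for Fabric's PKI and a plain in-memory dictionary stands in for the MSP registry; the drone IDs, function names, and registry layout are illustrative, not the authors' implementation.

```python
# Illustrative only: emulates PKI-based drone registration and access checks
# with Ed25519 keys; a real deployment would use Hyperledger Fabric's CA and
# Membership Service Provider instead of this in-memory registry.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

msp_registry = {}  # hypothetical MSP stand-in: drone_id -> public key

def register_drone(drone_id: str) -> Ed25519PrivateKey:
    """Enroll a drone: generate a key pair and record the public key."""
    key = Ed25519PrivateKey.generate()
    msp_registry[drone_id] = key.public_key()
    return key  # the drone keeps its private key

def authorize(drone_id: str, message: bytes, signature: bytes) -> bool:
    """Access control: accept a request only if the signature matches
    the public key enrolled for this drone."""
    pub = msp_registry.get(drone_id)
    if pub is None:
        return False
    try:
        pub.verify(signature, message)
        return True
    except InvalidSignature:
        return False

# Usage: a registered drone signs a telemetry upload request.
drone_key = register_drone("drone-001")
request = b"upload:telemetry:2022-05-01"
assert authorize("drone-001", request, drone_key.sign(request))
assert not authorize("drone-002", request, drone_key.sign(request))
```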

A Study on a Distributed Data Fabric-based Platform in a Multi-Cloud Environment

  • Moon, Seok-Jae; Kang, Seong-Beom; Park, Byung-Joon
    • International Journal of Advanced Culture Technology / v.9 no.3 / pp.321-326 / 2021
  • In a multi-cloud environment, distributed source data should interoperate efficiently with minimal physical movement and without building a data warehouse or data lake, and a data platform is needed that makes data easily accessible from anywhere. In this paper, we propose a new data-fabric-based platform built around a distributed architecture suited to cloud environments, overcoming the limitations of legacy systems. The platform applies a knowledge graph database to the physical linkage of source data for interoperability of distributed data. By integrating all data into one scalable platform across the multi-cloud environment, it uses Holochain so that companies can easily access and move data, with security and authority guaranteed regardless of where the data is stored. The knowledge graph database mitigates heterogeneity conflicts in data interoperability in a decentralized environment, and Holochain accelerates memory and security processing compared with traditional blockchains. As a result, access to and sharing of distributed data becomes more flexible, and metadata matching is handled effectively.
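
A toy sketch of the metadata layer such a data fabric could rely on, assuming a simple triple-style catalog that resolves a logical dataset name to its physical cloud location without moving the data; the dataset names, cloud labels, and query helper are hypothetical, not the authors' design.

```python
# Illustrative only: a tiny in-memory "knowledge graph" of (subject, predicate,
# object) triples that lets a client locate a dataset across clouds without
# copying it into a central warehouse.
triples = {
    ("sales_2021", "storedIn", "aws:s3://bucket-a/sales_2021.parquet"),
    ("sales_2021", "schema", "order_id,amount,region"),
    ("sensor_logs", "storedIn", "gcp:gs://bucket-b/sensor_logs/"),
    ("sensor_logs", "sameDomainAs", "sales_2021"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

# Resolve the physical location of a logical dataset, wherever it lives.
location = query(subject="sales_2021", predicate="storedIn")[0][2]
print(location)  # -> aws:s3://bucket-a/sales_2021.parquet
```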

A Study on Location-Based Services Based on Semantic Web

  • Kim, Jong-Woo; Kim, Ju-Yeon; Kim, Chang-Soo
    • Journal of Korea Multimedia Society / v.10 no.12 / pp.1752-1761 / 2007
  • Location-based services (LBS) integrate a mobile device's location with other information to provide added value to the user. Although location-based services give users convenient information, managing and sharing heterogeneous and voluminous data in decentralized environments is a complex task. In this paper, we propose the Semantic LBS Model as one solution to this problem. The Semantic LBS Model is an LBS middleware model that includes an ontology-based data model for LBS point-of-interest (POI) information and a processing mechanism based on Semantic Web technologies. Our model enables POI information to be described and retrieved over various domain-specific ontologies based on our proposed POIDL ontology. This mechanism provides rich expressiveness, interoperability, and flexibility in describing and using information about POIs, and it can enhance POI retrieval services.
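
A rough sketch of how ontology-tagged POI descriptions might be retrieved, assuming a minimal class hierarchy and property filter in place of a full Semantic Web stack; the ontology classes, properties, and POI entries are invented for illustration and are not part of the POIDL ontology itself.

```python
# Illustrative only: POIs described with domain-specific ontology terms and
# retrieved by class membership plus property constraints.
ONTOLOGY = {  # subclass -> superclass, a stand-in for an ontology hierarchy
    "Restaurant": "POI",
    "KoreanRestaurant": "Restaurant",
    "Museum": "POI",
}

POIS = [
    {"name": "Dongnae Grill", "type": "KoreanRestaurant", "priceRange": "low"},
    {"name": "City Museum", "type": "Museum", "admission": "free"},
]

def is_a(poi_type, target):
    """Walk the subclass chain to test ontology class membership."""
    while poi_type is not None:
        if poi_type == target:
            return True
        poi_type = ONTOLOGY.get(poi_type)
    return False

def retrieve(target_class, **constraints):
    """Return POIs of the target class satisfying all property constraints."""
    return [
        p for p in POIS
        if is_a(p["type"], target_class)
        and all(p.get(k) == v for k, v in constraints.items())
    ]

print(retrieve("Restaurant", priceRange="low"))  # finds the Korean restaurant
```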

Flight trajectory generation through post-processing of launch vehicle tracking data (발사체 추적자료 후처리를 통한 비행궤적 생성)

  • Yun, Sek-Young; Lyou, Joon
    • Journal of Korea Society of Industrial Information Systems / v.19 no.6 / pp.53-61 / 2014
  • To monitor the flight trajectory and status of a launch vehicle, the mission control system at the NARO Space Center processes data acquired from the ground tracking system, which consists of two tracking radars, four telemetry stations, and one electro-optical tracking system. Each tracking unit exhibits its own tracking error, mainly due to multipath, clutter, and radio refraction, so the actual vehicle trajectory cannot be determined from any single transmitted source alone. This paper presents a way of generating the flight trajectory by post-processing the data received from the ground tracking system. The post-processing algorithm is divided into two parts, compensation for atmospheric radio refraction and multi-sensor fusion, for which a decentralized Kalman filter based on a constant-acceleration model was adopted and implemented. Applying the scheme to real data produced a flight trajectory whose tracking errors were smaller than those of any single sensor.
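
A simplified one-dimensional sketch of the fusion step described above, assuming each sensor runs its own constant-acceleration Kalman filter and the local estimates are then combined by inverse-covariance weighting; the noise values, time step, and two-sensor setup are illustrative choices, not the paper's parameters, and the refraction compensation step is omitted.

```python
# Illustrative only: a 1-D constant-acceleration Kalman filter per sensor,
# with local estimates fused by information (inverse-covariance) weighting.
import numpy as np

def make_filter(dt, meas_var):
    F = np.array([[1, dt, 0.5 * dt**2],
                  [0, 1, dt],
                  [0, 0, 1]])                      # constant-acceleration model
    H = np.array([[1.0, 0.0, 0.0]])                # each sensor measures position
    Q = 1e-3 * np.eye(3)                           # process noise (assumed)
    R = np.array([[meas_var]])                     # sensor noise (assumed)
    x, P = np.zeros((3, 1)), np.eye(3)
    def step(z):
        nonlocal x, P
        x, P = F @ x, F @ P @ F.T + Q              # predict
        y = np.array([[z]]) - H @ x                # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x, P = x + K @ y, (np.eye(3) - K @ H) @ P  # update
        return x.copy(), P.copy()
    return step

def fuse(estimates):
    """Combine local (x, P) estimates by inverse-covariance weighting."""
    infos = [np.linalg.inv(P) for _, P in estimates]
    P_fused = np.linalg.inv(sum(infos))
    x_fused = P_fused @ sum(I @ x for (x, _), I in zip(estimates, infos))
    return x_fused, P_fused

# Usage: two sensors (say, a radar and a telemetry station) track one target.
truth = lambda t: 0.5 * 3.0 * t**2                 # true position, a = 3 m/s^2
radar, telemetry = make_filter(0.1, 4.0), make_filter(0.1, 9.0)
rng = np.random.default_rng(0)
for k in range(50):
    t = 0.1 * k
    est_r = radar(truth(t) + rng.normal(0, 2.0))
    est_t = telemetry(truth(t) + rng.normal(0, 3.0))
    x_f, _ = fuse([est_r, est_t])
print("fused position estimate:", float(x_f[0, 0]))
```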

Data Resource Management under Distributed Computing Environment (분산 컴퓨팅 환경하에서의 데이타 자원 관리)

  • 조희경; 안중호
    • Proceedings of the Korea Database Society Conference / 1994.09a / pp.105-129 / 1994
  • Corporate information systems face a new environment characterized by miniaturization, decentralization, and open systems, so it is of utmost importance for corporations to adapt flexibly to this environment by making corresponding changes to their existing information systems. The objectives of this study are to identify the new environment faced by today's information systems and to develop effective methods for data resource management within it. The study assumes that this new environment can be specified as a distributed computing environment and presents client/server architecture as its representative computing structure. Client/server architecture is defined here as a computing architecture that specializes the functionality of the client and server systems so that an application can be distributed and processed cooperatively on the most suitable platform. From among the five structures used in client/server architecture for distributing and cooperatively processing applications between server and client, this study presents two data management methods for the client/server environment: the "remote data management method," which uses a file server or database server, and the "distributed data management method," which uses a distributed database management system. The study concludes that, even though distributed applications are assumed in the client/server environment, data may remain centralized (with a file server or database server) or become decentralized (with a distributed database system), and that the distributed-database approach, which gives users full responsibility for and control over the data they use, not only adapts better to today's flexible corporate environment but also offers a more cost-efficient data management alternative to existing methods.
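
A schematic sketch contrasting the two data management methods the abstract names, assuming a toy query router in place of a real file/database server or distributed DBMS; the class names, partitioning rule, and sample records are purely illustrative.

```python
# Illustrative only: remote data management sends every query to one central
# server, while distributed data management routes each query to the node
# that owns the relevant partition of the data.

class RemoteDataManager:
    def __init__(self, server):
        self.server = server                 # single file/database server

    def query(self, key):
        return self.server.get(key)          # all clients hit the same server


class DistributedDataManager:
    def __init__(self, nodes):
        self.nodes = nodes                   # node-local stores

    def query(self, key):
        owner = hash(key) % len(self.nodes)  # simple partitioning rule
        return self.nodes[owner].get(key)    # query runs where the data lives


# Usage: the same lookups, served centrally vs. from the owning node.
records = {"cust:1": "Kim", "cust:2": "Ahn"}
central = dict(records)
nodes = [{}, {}]
for key, value in records.items():
    nodes[hash(key) % len(nodes)][key] = value   # place data by the same rule

print(RemoteDataManager(central).query("cust:1"))
print(DistributedDataManager(nodes).query("cust:2"))
```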

The File Splitting Distribution Scheme Using the P2P Networks with The Mesh topology (그물망 위상의 P2P 네트워크를 활용한 파일 분리 분산 방안)

  • Lee Myoung-Hoon; Park Jung-Su; Kim Jin-Hong; Jo In-June
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.8 / pp.1669-1675 / 2005
  • Small wireless terminals have recently had difficulty processing large files, as terminals shrink while files keep growing. Moreover, web servers and file servers suffer overload because large numbers of files are concentrated on them, and processing data in units of independent whole files creates a security vulnerability. To resolve these problems, this paper proposes a new file splitting distribution scheme using P2P networks with a mesh topology. The proposed scheme distributes the blocks of a file to arbitrary peers in the P2P network. As a result, small wireless terminals can handle large files, the overload on web and file servers is relieved because files are decentralized, and the security vulnerability of data processing is mitigated because processing is distributed to peers in units of blocks.
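
A toy sketch of the block-splitting idea, assuming fixed-size blocks, a dictionary per peer, and round-robin placement; the block size, peer count, and hash-based integrity check are illustrative choices, not the paper's parameters.

```python
# Illustrative only: split a file into fixed-size blocks, scatter the blocks
# across mesh peers, then fetch and reassemble them on a small terminal.
import hashlib

BLOCK_SIZE = 4                                    # bytes per block (tiny, for demo)

def split_and_distribute(data: bytes, peers):
    """Scatter blocks round-robin; return an index of (peer_no, block_hash)."""
    index = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        peer_no = (i // BLOCK_SIZE) % len(peers)  # round-robin placement
        peers[peer_no][digest] = block            # no peer holds the whole file
        index.append((peer_no, digest))
    return index

def reassemble(index, peers) -> bytes:
    """Fetch each block from its peer, verify its hash, and concatenate."""
    parts = []
    for peer_no, digest in index:
        block = peers[peer_no][digest]
        assert hashlib.sha256(block).hexdigest() == digest
        parts.append(block)
    return b"".join(parts)

peers = [{}, {}, {}]                              # three mesh peers
index = split_and_distribute(b"sensor report 2005-08", peers)
assert reassemble(index, peers) == b"sensor report 2005-08"
```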

A Study on DID-based Vehicle Component Data Collection Model for EV Life Cycle Assessment (전기차 전과정평가를 위한 DID 기반 차량부품 데이터수집 모델 연구)

  • Jun-Woo Kwon; Soojin Lee; Jane Kim; Seung-Hyun Seo
    • KIPS Transactions on Computer and Communication Systems / v.12 no.10 / pp.309-318 / 2023
  • Recently, countries have moved to introduce life cycle assessment (LCA) to regulate greenhouse gas emissions. An LCA measures and evaluates the greenhouse gas emitted over the entire life cycle of a vehicle, and reliable data for each electric vehicle component is needed to make the LCA results trustworthy. To this end, life cycle evaluation models using blockchain technology have been studied. However, in the existing models, key product information is exposed to other participants, and every update to component data must be recorded on the blockchain ledger as a transaction, which is inefficient. In this paper, we propose a DID (Decentralized Identity)-based data collection model for LCA that collects vehicle component data and verifies its validity effectively. The proposed model increases the reliability of the LCA by guaranteeing the validity and integrity of the collected data and verifying its source. Because only user authentication information is shared on the blockchain ledger, the model prevents indiscriminate exposure of data and efficiently verifies and updates data provenance.
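
A minimal sketch of the collection flow this abstract suggests, assuming a ledger that stores only DID-to-authentication-key records while the component data itself travels off-chain with a signature; the DID strings, field names, and registry layout are hypothetical and not the authors' model.

```python
# Illustrative only: the ledger holds just the supplier's DID document
# (authentication key); component data is exchanged off-chain and its
# origin and integrity are checked against the on-ledger key.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

ledger = {}  # hypothetical ledger: DID -> DID document (authentication key)

def register_did(did: str) -> Ed25519PrivateKey:
    key = Ed25519PrivateKey.generate()
    ledger[did] = {"authentication": key.public_key()}  # only auth info on-chain
    return key

def issue_component_record(did: str, key: Ed25519PrivateKey, data: dict) -> dict:
    payload = json.dumps(data, sort_keys=True).encode()
    return {"issuer": did, "data": data, "signature": key.sign(payload)}

def verify_component_record(record: dict) -> bool:
    doc = ledger.get(record["issuer"])
    if doc is None:
        return False
    payload = json.dumps(record["data"], sort_keys=True).encode()
    try:
        doc["authentication"].verify(record["signature"], payload)
        return True
    except InvalidSignature:
        return False

# Usage: a battery supplier submits emissions data for one component.
supplier_key = register_did("did:example:battery-supplier-01")
record = issue_component_record(
    "did:example:battery-supplier-01", supplier_key,
    {"component": "battery-pack", "kgCO2e": 1850},
)
assert verify_component_record(record)
```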

A HELPDESK system design for communication network service (데이터 통신서비스를 위한 EJB기반 통합 HELPDESK 설계에 관한 연구)

  • 조동권
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.5 / pp.661-666 / 2002
  • A flexible method is needed for communication network configuration and for fault management business procedures, so developing a systematic, integrated fault management system is essential to meet these requirements. The integrated fault management system must be designed so that repair processing can run both for existing data communication network management and for the various types of next-generation data communication networks. In general, it is effective to compose the system of decentralized modules so that business logic and data are accessible from remote locations. One way to achieve this is to use object-oriented design techniques, that is, to abstract reusable objects and build component modules from them. In this paper, we propose a fault management system for communication network services designed with object-oriented techniques, namely UML (Unified Modeling Language) and EJB (Enterprise JavaBeans).

Block-chain based Secure Data Access over Internet of Health Application Things (IHoT)

  • A. Ezil Sam, Leni; R. Shankar; R. Thiagarajan; Vishal Ratansing Patil
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.5 / pp.1484-1502 / 2023
  • The medical sector actively changes and adopts innovative features in response to technical development and revolution. One of the most crucial elements of IoT-connected health services is safeguarding critical patient records from prospective attackers, and blockchain (BC) is gaining traction in this sector owing to its large-scale deployments. As a distributed and decentralized technology, BC can efficiently handle everyday activities, and compared with other industries the medical sector is one of the most prominent areas where a BC network can be valuable, creating a wide range of possibilities for existing medical institutions. Throughout this study, we address the widespread application and influence of BC technology in modern medical systems, focusing on critical requirements such as trustworthiness, security, and safety. Furthermore, we built a shared ledger in which blockchain-based healthcare providers keep patient information and contracts between the several parties involved. The study's findings demonstrate the usefulness of BC technology in IoHT for keeping patient health data. The proposed BDSA-IoHT eliminates 2.01 seconds of service delay and 1.9 seconds of processing time, enhancing efficiency by nearly 30%.
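
A bare-bones sketch of the hash-chained shared-ledger idea referenced above, assuming only hashes of patient records (not the records themselves) are appended and tamper checks walk the chain links; the block fields and genesis value are illustrative, not the BDSA-IoHT design.

```python
# Illustrative only: an append-only chain where each block commits to the
# previous block's hash, so altering an earlier patient entry is detectable.
import hashlib, json, time

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, record: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64      # genesis link
    chain.append({"prev": prev, "time": time.time(),
                  "record_hash": hashlib.sha256(
                      json.dumps(record, sort_keys=True).encode()).hexdigest()})

def verify_chain(chain: list) -> bool:
    for i in range(1, len(chain)):
        if chain[i]["prev"] != block_hash(chain[i - 1]):
            return False
    return True

# Usage: two encounters are appended; tampering with the first is detected.
chain = []
append_record(chain, {"patient": "P-001", "event": "admission"})
append_record(chain, {"patient": "P-001", "event": "discharge"})
assert verify_chain(chain)
chain[0]["record_hash"] = "forged"
assert not verify_chain(chain)
```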

The software architecture for the internal data processing in Gigabit IP Router (기가비트 라우터 시스템에서의 내부 데이터 처리를 위한 소프트웨어 구조)

  • Lee, Wang-Bong; Chung, Young-Sik; Kim, Tae-Il; Bang, Young-Cheol
    • The KIPS Transactions: Part C / v.10C no.1 / pp.71-76 / 2003
  • Internet traffic is growing tremendously heavier due to the exponential growth in Internet users and the spread of e-commerce and network games. High-speed routers capable of fast packet forwarding are commercially available to satisfy the growing bandwidth. A high-speed router with a decentralized multiprocessing architecture for IP and routing functions consists of host processors, line interfaces, and switch fabrics. In this paper, we propose a software architecture tuned for high-speed handling of non-forwarding packets. IPCMP (Inter-Processor Communication Message Protocol), a mechanism for inter-processor communication (IPC), is also proposed and implemented. The proposed IPC mechanism achieves a packet-processing rate about 10% higher than the conventional UDP/IP-based IPC mechanism.
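
A speculative sketch of what a compact inter-processor message header could look like, with hypothetical fields (source/destination processor IDs, message type, payload length); the actual IPCMP format is not described in the abstract, so this is purely illustrative.

```python
# Illustrative only: pack/unpack a fixed-size header for messages exchanged
# between a host processor and a line interface, bypassing the UDP/IP stack
# that the conventional mechanism would use.
import struct

HEADER = struct.Struct("!BBHI")   # src id, dst id, msg type, payload length

def pack_message(src: int, dst: int, msg_type: int, payload: bytes) -> bytes:
    return HEADER.pack(src, dst, msg_type, len(payload)) + payload

def unpack_message(frame: bytes):
    src, dst, msg_type, length = HEADER.unpack_from(frame)
    return src, dst, msg_type, frame[HEADER.size:HEADER.size + length]

# Usage: a host processor sends a routing-table update to line interface 3.
frame = pack_message(src=1, dst=3, msg_type=0x10, payload=b"route add 10.0.0.0/8")
print(unpack_message(frame))
```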