• Title/Summary/Keyword: distributed applications

Search Results: 1,258

UniPy: A Unified Programming Language for MGC-based IoT Systems

  • Kim, Gayoung;Choi, Kwanghoon;Chang, Byeong-Mo
    • Journal of the Korea Society of Computer and Information / v.24 no.3 / pp.77-86 / 2019
  • The advent of the Internet of Things (IoT) has made it common for computing environments to involve programming not a single computer but several heterogeneous distributed computers together. Developing a separate program for each computer increases the programmer's burden, and testing all the programs becomes more complex. To address this challenge, this paper proposes an RPC-based unified programming language, UniPy, for the development of MGC (eMbedded, Gateway, and Cloud) applications in IoT systems configured with popular computers such as an Arduino, a Raspberry Pi, and a Web-based DB server. UniPy offers programmers a view of classes as locations and a very simple form of remote procedure call mechanism. Our UniPy compiler automatically splits a UniPy program into smaller per-location programs and generates the necessary RPC mechanism. An advantage of UniPy is that its unified programming model lets programmers write local code just as they would for a single computer, requiring no extra knowledge, which distinguishes it from existing work such as Fabryq and Ravel. Also, the structure of UniPy programs allows programmers to test them by executing them directly before splitting, a feature that has not been emphasized before.
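As a rough illustration of the "classes as locations" idea described in this abstract, the Python sketch below (not UniPy syntax; the `location` decorator and `call` helper are hypothetical) shows how location-annotated classes and a stand-in RPC dispatcher let the whole program run, and be tested, on a single machine before being split.

```python
# Illustrative sketch only: a rough Python analogue of "classes as locations"
# with a transparent RPC stand-in. Names such as `location` and `call` are hypothetical.

import json

REGISTRY = {}          # location name -> object instance (stands in for a remote node)

def location(name):
    """Mark a class as living at a named location (e.g. 'embedded', 'gateway', 'cloud')."""
    def wrap(cls):
        REGISTRY[name] = cls()
        cls._location = name
        return cls
    return wrap

def call(loc, method, *args):
    """Stand-in for the RPC a splitting compiler would generate: serialize the request,
    'send' it, and dispatch to the target object locally so the whole program can be
    tested on one machine before splitting."""
    request = json.dumps({"loc": loc, "method": method, "args": args})
    msg = json.loads(request)                      # pretend this crossed the network
    target = REGISTRY[msg["loc"]]
    return getattr(target, msg["method"])(*msg["args"])

@location("embedded")
class Sensor:
    def read(self):
        return 23.5                                # e.g. a temperature reading

@location("cloud")
class Store:
    def save(self, value):
        print(f"stored {value}")
        return True

if __name__ == "__main__":
    value = call("embedded", "read")               # would become an RPC to the embedded board
    call("cloud", "save", value)                   # would become an RPC to the DB server
```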

Adaptive Success Rate-based Sensor Relocation for IoT Applications

  • Kim, Moonseong;Lee, Woochan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.9 / pp.3120-3137 / 2021
  • Small-sized IoT wireless sensing devices can be deployed from small aircraft such as drones, and mobile IoT devices can be relocated to suit data collection when efficient relocation algorithms are available. However, the shape of the terrain may be unpredictable. Mobile IoT devices suitable for such terrain are hopping devices that move by jumping. So far, most studies on hopping sensor relocation have made the unrealistic assumption that every hopping device knows the overall state of the entire network and the current state of every other device. Recent work has proposed more realistic relocation algorithms based on a distributed network environment that do not require all information to be shared simultaneously. However, since the shortest-path-based algorithm exchanges communication and movement requests with terminal nodes, it is not suitable for areas where obstacles are unevenly distributed. The proposed scheme applies a simple Monte Carlo method to a relay-node-selection random variable that reflects the characteristics of the obstacle distribution, choosing a good relay node in a reinforcement-learning manner rather than fixing specific relay nodes. Using the relay-node-selection random variable significantly reduces the additional messages that would otherwise be generated to find the shortest path. An additional contribution of this paper is the first distributed environment-based relocation protocol that reflects the characteristics of real-world physical devices, implemented in the OMNeT++ simulator. We also reconstruct a three-day disaster scenario, and performance evaluation is carried out by applying the proposed protocol to this simulated real-world environment.
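A minimal sketch, assuming simple per-candidate success-rate bookkeeping, of how a "relay node selection random variable" could drive a Monte Carlo choice of relay; this is illustrative only and not the authors' protocol.

```python
# Choose a relay node by sampling a random variable whose weights track observed
# hop success rates, so relays behind dense obstacles are picked less often over time.

import random

success = {"A": 1, "B": 1, "C": 1}   # successful hops toward each candidate relay
attempts = {"A": 1, "B": 1, "C": 1}  # attempted hops (initialized to 1 to avoid 0/0)

def pick_relay():
    rates = {n: success[n] / attempts[n] for n in success}
    nodes, weights = zip(*rates.items())
    return random.choices(nodes, weights=weights, k=1)[0]   # Monte Carlo draw

def report(node, ok):
    attempts[node] += 1
    if ok:
        success[node] += 1

if __name__ == "__main__":
    # Simulate: a relay near obstacles (here "C") fails more often.
    true_rate = {"A": 0.9, "B": 0.6, "C": 0.2}
    for _ in range(500):
        n = pick_relay()
        report(n, random.random() < true_rate[n])
    print({n: round(success[n] / attempts[n], 2) for n in success})
```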

Development of Big-data Management Platform Considering Docker Based Real Time Data Connecting and Processing Environments (도커 기반의 실시간 데이터 연계 및 처리 환경을 고려한 빅데이터 관리 플랫폼 개발)

  • Kim, Dong Gil;Park, Yong-Soon;Chung, Tae-Yun
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.4 / pp.153-161 / 2021
  • Real-time access is required to handle continuous, unstructured data, and management must remain flexible under dynamic conditions. A platform can be built that allows data collection, storage, and processing on a single local server or across multiple servers. Although the former, centralized, method is easy to control, it creates an overload problem because all processing is carried out in one unit; the latter, distributed, method performs parallel processing, so it responds quickly and scales easily, but its design is more complex. Following the latter approach, this paper provides data collection and processing on one platform so that significant insights can be derived from the various data held by an enterprise or agency, makes them intuitively available on dashboards, and utilizes Spark to improve distributed processing performance. All services are distributed and managed using Docker containers. The data used in this study were collected entirely from Kafka; for a 4.4-gigabyte file, the processing time in Spark cluster mode was 2 minutes 15 seconds, about 3 minutes 19 seconds faster than in local mode.
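A hedged sketch of the kind of ingestion pipeline the abstract describes: Spark Structured Streaming reading a Kafka topic. The broker address, topic name, and console sink are placeholder assumptions, and the spark-sql-kafka connector must be available at submission time.

```python
# Spark reads a Kafka topic as a stream and processes it; placeholders throughout.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (SparkSession.builder
         .appName("kafka-ingest")          # submitted in local or cluster mode
         .getOrCreate())

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "kafka:9092")   # placeholder broker
       .option("subscribe", "sensor-data")                # placeholder topic
       .load())

# Kafka delivers bytes; cast the value column to a string for downstream parsing.
lines = raw.select(col("value").cast("string").alias("line"))

query = (lines.writeStream
         .format("console")                # replace with a real sink (e.g. storage)
         .outputMode("append")
         .start())
query.awaitTermination()
```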

Wellness Prediction in Diabetes Mellitus Risks Via Machine Learning Classifiers

  • Saravanakumar M, Venkatesh;Sabibullah, M.
    • International Journal of Computer Science & Network Security / v.22 no.4 / pp.203-208 / 2022
  • The incidence of Type 2 Diabetes Mellitus (T2DM) is rising globally. Diabetes Mellitus of all kinds is estimated to affect over 415 million adults worldwide. It was the seventh leading cause of death worldwide, with an estimated 1.6 million deaths directly caused by diabetes in 2016. Over 90% of diabetes cases are T2DM, with most patients in the UK having at least one other chronic condition. To assess contemporary applications of Big Data (BD) to diabetes care and its future capabilities, a deep review of the principal theoretical literature is required. Long-term progress in medicine and, in particular, in the field of "Diabetology" is strongly driven by a sequence of changes and innovations. Medical and healthcare data from varied sources, such as diagnoses and treatment plans, help healthcare workers gain real insight into the development of the diabetes care measures available to them. Apache Spark provides the Resilient Distributed Dataset (RDD), a key data structure distributed across a cluster of machines. Machine Learning (ML) offers a noteworthy approach for building elegant and automatic algorithms. Common ML algorithms such as Support Vector Classification and Random Forest are investigated in this work using Jupyter Notebook and Python code, where the models' results are reported in terms of accuracy.
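A minimal sketch, not the authors' notebook, of comparing Support Vector Classification and Random Forest by accuracy; synthetic data stands in for the diabetes records, and scikit-learn stands in for whichever ML library the study used.

```python
# Train SVC and Random Forest on the same split and compare accuracy.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("SVC", SVC()), ("RandomForest", RandomForestClassifier())]:
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: accuracy = {acc:.3f}")
```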

Interpretation and Statistical Analysis of Ethereum Node Discovery Protocol (이더리움 노드 탐색 프로토콜 해석 및 통계 분석)

  • Kim, Jungyeon;Ju, Hongteak
    • KNOM Review / v.24 no.2 / pp.48-55 / 2021
  • Ethereum is an open software platform based on blockchain technology that enables the construction and distribution of distributed applications. Ethereum uses a fully distributed connection model in which all participating nodes join the network with equal authority and rights. Ethereum networks use a Kademlia-based node discovery protocol to retrieve and store node information. Ethereum strives to stabilize the overall network topology through this node discovery protocol, but systems for monitoring it are insufficient. This paper develops a Wireshark dissector that can decode packets exchanged during the Ethereum node discovery process and presents network packet measurement results. These results can serve as basic data for research on network performance improvement and vulnerabilities through analysis of the Ethereum node discovery process.
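The paper's dissector is a Wireshark plugin, but the same header logic can be sketched in Python, assuming the Ethereum discovery v4 wire format (32-byte hash, 65-byte signature, 1-byte packet type, then RLP payload) for UDP traffic on port 30303.

```python
# Classify captured discv4 payloads by their packet-type byte and tally per-type counts,
# the kind of statistic the paper reports. Payloads here are placeholders.

PACKET_TYPES = {1: "Ping", 2: "Pong", 3: "FindNode", 4: "Neighbors"}

def classify(payload: bytes) -> str:
    """Return the discv4 message name for one UDP payload, or a fallback label."""
    if len(payload) < 98:                     # 32 + 65 + 1 header bytes at minimum
        return "too short"
    packet_type = payload[97]                 # byte right after hash + signature
    return PACKET_TYPES.get(packet_type, f"unknown({packet_type})")

def tally(payloads):
    counts = {}
    for p in payloads:
        name = classify(p)
        counts[name] = counts.get(name, 0) + 1
    return counts

if __name__ == "__main__":
    fake_ping = bytes(32) + bytes(65) + bytes([1]) + b"\xc0"   # placeholder packet
    print(tally([fake_ping, fake_ping]))
```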

Gaussian noise addition approaches for ensemble optimal interpolation implementation in a distributed hydrological model

  • Manoj Khaniya;Yasuto Tachikawa;Kodai Yamamoto;Takahiro Sayama;Sunmin Kim
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.25-25 / 2023
  • The ensemble optimal interpolation (EnOI) scheme is a sub-optimal alternative to the ensemble Kalman filter (EnKF) with a reduced computational demand, making it potentially more suitable for operational applications. Since only one model is integrated forward instead of an ensemble of model realizations, online estimation of the background error covariance matrix is not possible in the EnOI scheme. In this study, we investigate two Gaussian noise based ensemble generation strategies to produce dynamic covariance matrices for assimilation of water level observations into a distributed hydrological model. In the first approach, spatially correlated noise, sampled from a normal distribution with a fixed fractional error parameter (which controls its standard deviation), is added to the model forecast state vector to prepare the ensembles. In the second method, we use an adaptive error estimation technique based on innovation diagnostics to estimate this error parameter within the assimilation framework. The results from a real experiment and a set of synthetic experiments indicate that the EnOI scheme can provide better results when an optimal EnKF is not identified, but it performs worse than the ensemble filter when the true error characteristics are known. Furthermore, while the adaptive approach reduces the sensitivity to the fractional error parameter that affects the first (non-adaptive) approach, its results are usually worse at ungauged locations.
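A hedged numpy sketch of the first (non-adaptive) strategy described above: spatially correlated Gaussian noise scaled by a fixed fractional error parameter perturbs the single forecast to form an ensemble, whose anomalies supply the background covariance for an EnOI-style update. Grid size, correlation length, and error values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, n_ens = 20, 30                  # state size and ensemble size (placeholders)
frac_err = 0.1                     # fixed fractional error parameter
x_f = np.linspace(5.0, 15.0, n)    # stand-in forecast state (e.g. water levels)

# Spatially correlated noise: exponential correlation with length scale L.
L = 3.0
dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
corr = np.exp(-dist / L)
noise = rng.multivariate_normal(np.zeros(n), corr, size=n_ens)   # (n_ens, n)

ens = x_f + frac_err * x_f * noise          # perturbed ensemble around the forecast
anom = (ens - ens.mean(axis=0)).T           # (n, n_ens) anomalies
B = anom @ anom.T / (n_ens - 1)             # background error covariance estimate

# Assimilate a single observation of state element 10 with error variance r.
H = np.zeros((1, n)); H[0, 10] = 1.0
r, y = 0.05, np.array([12.0])
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + np.array([[r]]))
x_a = x_f + (K @ (y - H @ x_f)).ravel()     # EnOI-style analysis update

print(x_a[8:13])
```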


Approach towards qualification of TCP/IP network components of PFBR

  • Aditya Gour;Tom Mathews;R.P. Behera
    • Nuclear Engineering and Technology / v.54 no.11 / pp.3975-3984 / 2022
  • A distributed control system architecture is adopted for the I&C systems of the Prototype Fast Breeder Reactor, where geographically distributed control systems are connected to centralized servers and display stations via switched Ethernet networks. TCP/IP communication plays a significant role in the successful operation of this architecture. The communication tasks at control nodes are taken care of by TCP/IP offload modules; the local area switched network is realized using layer-2/3 switches, which are finally connected to the network interfaces of the centralized servers and display stations. Safety, security, reliability, and fault tolerance of control systems used for safety-related applications of nuclear power plants are ensured by indigenous design and qualification as per guidelines laid down by regulatory authorities. In the case of commercially available components, an appropriate suitability analysis is required to obtain operational clearances from regulatory authorities. This paper details the proposed approach for the suitability analysis of TCP/IP communication nodes, including control systems in the field, network switches, and servers/display stations. The development of a test platform using commercially available tools, and of diagnostics software engineered for the control nodes/display stations, is described. The behavior of each TCP link with impaired packets and multiple traffic loads is described, followed by benchmarking of the network switch's routing characteristics and security features.
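Not the paper's qualification platform, but a toy sketch of one measurement it implies: timing request/response round trips over a TCP link so behaviour under different traffic loads (or with an impairment tool in the path) can be compared. Host, port, probe size, and repetition count are placeholder assumptions.

```python
# Measure round-trip times of fixed-size probes against a TCP echo endpoint.

import socket
import statistics
import time

HOST, PORT = "127.0.0.1", 7000     # placeholder echo endpoint
PAYLOAD = b"x" * 1024              # 1 KiB probe message
REPS = 100

def measure():
    rtts = []
    with socket.create_connection((HOST, PORT), timeout=2.0) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(REPS):
            t0 = time.perf_counter()
            s.sendall(PAYLOAD)
            received = 0
            while received < len(PAYLOAD):       # echo server returns the payload
                chunk = s.recv(4096)
                if not chunk:
                    raise ConnectionError("peer closed the link")
                received += len(chunk)
            rtts.append(time.perf_counter() - t0)
    return rtts

if __name__ == "__main__":
    samples = measure()
    print(f"median RTT {statistics.median(samples) * 1e3:.2f} ms, "
          f"max {max(samples) * 1e3:.2f} ms over {REPS} probes")
```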

Robust and Auditable Secure Data Access Control in Clouds

  • KARPAGADEEPA.S;VIJAYAKUMAR.P
    • International Journal of Computer Science & Network Security / v.24 no.5 / pp.95-102 / 2024
  • In distributed computing, searchable encryption over auditable data is an active research field. However, most existing systems for encrypted search and auditing over outsourced cloud data ignore personalized search intent. Cloud storage access control is important for the security of the stored data, where data security is enforced only for the encrypted content. This is less secure because an intruder may attempt to extract the encrypted records or information. To address this issue we implement CBC (Cipher Block Chaining), in which each plaintext block is XORed with the ciphertext block that was previously produced. We propose a novel heterogeneous framework to address the problem of a single-point performance bottleneck and to provide a more efficient access control scheme with an auditing mechanism. Meanwhile, in our scheme, a CA (Central Authority) is introduced to generate secret keys for users whose legitimacy has been verified. Unlike other multi-authority access control schemes, each of the authorities in our scheme manages the whole attribute set individually. Keywords: Cloud storage, Access control, Auditing, CBC.
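A sketch of the CBC principle the abstract refers to, in which each plaintext block is XORed with the previously produced ciphertext block before encryption; AES from the `cryptography` package is driven block by block purely to expose the chaining, whereas production code would use the library's built-in CBC mode with proper padding.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16  # AES block size in bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    assert len(plaintext) % BLOCK == 0, "example assumes block-aligned input"
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()  # raw block cipher
    previous, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        block = plaintext[i:i + BLOCK]
        chained = xor(block, previous)          # XOR with the prior ciphertext block
        previous = encryptor.update(chained)    # encrypt the chained block
        out += previous
    return out

if __name__ == "__main__":
    key, iv = os.urandom(32), os.urandom(16)
    ciphertext = cbc_encrypt(key, iv, b"sixteen byte blk" * 2)
    print(ciphertext.hex())
```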

Extending a WebDAV Protocol to Efficiently Support the Management of User Properties (사용자 속성 관리의 효율적 지원을 위한 WebDAV 프로토콜의 확장)

  • Jung Hye-Young;Kim Dong-Ho;Ahn Geon-Tae;Lee Myung-Joon
    • The KIPS Transactions:PartC / v.12C no.7 s.103 / pp.1057-1066 / 2005
  • WebDAV (Web-based Distributed Authoring and Versioning), a protocol supporting web-based distributed authoring and versioning, provides a standard infrastructure for asynchronous collaboration on various contents over the Internet. WebDAV property management is the facility for setting and managing key information about resources as properties, and a user property, one kind of WebDAV property, can be freely defined by users. This free definition of user properties makes them very useful for developing web-based applications such as WebDAV-based collaboration systems. However, the existing WebDAV property management scheme limits the variety of applications that can be developed. This paper describes DavUP (WebDAV User property design Protocol), a protocol that extends the original WebDAV, and its utilization to efficiently support the management of WebDAV user properties. DavUP requires the definition of a collection structure and type definition properties for an application. To do this, we added a new header and appropriate WebDAV method functions to the WebDAV protocol. To show the usefulness of DavUP, we extended our DAVinci WebDAV server to support the DavUP protocol and experimentally implemented a general Open Workspace, which provides effective functions for sharing and exchanging open data among general users, on top of DAVinci.
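A sketch of the standard WebDAV user-property mechanism that DavUP builds on: PROPPATCH stores a user-defined property in a custom namespace and PROPFIND reads it back. The server URL, namespace, and property name are placeholders, and DavUP's additional header and methods are not reproduced here.

```python
import requests

URL = "http://localhost:8080/dav/report.doc"        # placeholder resource
AUTH = ("user", "password")                          # placeholder credentials

PROPPATCH_BODY = """<?xml version="1.0" encoding="utf-8"?>
<D:propertyupdate xmlns:D="DAV:" xmlns:X="http://example.com/ns/">
  <D:set>
    <D:prop><X:reviewer>alice</X:reviewer></D:prop>
  </D:set>
</D:propertyupdate>"""

PROPFIND_BODY = """<?xml version="1.0" encoding="utf-8"?>
<D:propfind xmlns:D="DAV:" xmlns:X="http://example.com/ns/">
  <D:prop><X:reviewer/></D:prop>
</D:propfind>"""

headers = {"Content-Type": "application/xml"}

# Set the user-defined property on the resource.
r = requests.request("PROPPATCH", URL, data=PROPPATCH_BODY, headers=headers, auth=AUTH)
print("PROPPATCH:", r.status_code)                   # 207 Multi-Status on success

# Read it back (Depth: 0 restricts the query to this resource only).
r = requests.request("PROPFIND", URL, data=PROPFIND_BODY,
                     headers={**headers, "Depth": "0"}, auth=AUTH)
print("PROPFIND:", r.status_code)
print(r.text)
```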

Case study on the Distributed Multi-stage Blasting using Stemming-Help Plastic Sheet and Programmable Sequential Blasting Machine (전색보호판과 다단발파기를 이용한 다단식분산발파의 현장 적용 사례)

  • Kim, Se-Won;Lim, Ick-Hwan;Kim, Jae-Sung
    • Explosives and Blasting / v.31 no.2 / pp.14-24 / 2013
  • The most effective way to remove rock in a downtown area is to split it by blasting with a small amount of explosive in each hole. However, environmental factors not only limit where blasting can be applied but also enlarge the areas where it is forbidden. Because the method described here is a distributed multi-stage blasting method that reduces vibration by applying an optimized precision-control blasting method, it is applicable in all such situations. The procedure is to fix a stemming-help plastic sheet to the hole entrance when stemming the explosives and to insert two or three sets of detonators and explosive primers with the same delay time. This method is more efficient in downtown areas where complaints and disputes over vibration are expected. It is also easy to apply, by designing the blast pattern accordingly, in areas where delay-time blasting is difficult after multi-stage explosive stemming due to the short length of the blast holes (1.2~3.0 m), and there is no detonator wire shorting or dead-pressing.