• Title/Summary/Keyword: 노드정보 (node information)

Search Results: 5,897 / Processing Time: 0.03 seconds

A Routing Algorithm based on Deep Reinforcement Learning in SDN (SDN에서 심층강화학습 기반 라우팅 알고리즘)

  • Lee, Sung-Keun
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.16 no.6
    • /
    • pp.1153-1160
    • /
    • 2021
  • This paper proposes a routing algorithm that determines the optimal path using deep reinforcement learning in software-defined networks. The deep reinforcement learning model is based on DQN; its inputs are the current network state and the source and destination nodes, and its output is a list of routes from source to destination. The routing task is defined as a discrete control problem, and the quality-of-service parameters considered for routing are delay, bandwidth, and loss rate. The routing agent classifies the appropriate service class according to the user's quality-of-service profile and derives, from the current network state collected from the SDN, the service class that each link can provide. Based on this derived information, it learns to select a route that satisfies the required service level from the source to the destination. The simulation results indicated that after the proposed algorithm runs for a certain number of episodes, it selects the correct path and learning is performed successfully.
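The decision loop described in the abstract can be illustrated with a toy sketch. Everything below (the 4-node topology, the link costs standing in for the delay/bandwidth/loss QoS parameters, and the hyperparameters) is an assumption for illustration; the paper's agent is a DQN, which would replace this tabular Q-function with a neural network over the full network state:

```python
import random

# Hypothetical 4-node topology; each edge weight stands in for a
# combined QoS cost (delay / bandwidth / loss rate) of that link.
GRAPH = {
    0: {1: 1.0, 2: 4.0},
    1: {0: 1.0, 2: 1.0, 3: 5.0},
    2: {0: 4.0, 1: 1.0, 3: 1.0},
    3: {1: 5.0, 2: 1.0},
}

def q_route(graph, src, dst, episodes=2000, alpha=0.5, gamma=0.9,
            eps=0.2, seed=0):
    """Tabular Q-learning stand-in for the paper's DQN: state = current
    node, action = next hop, reward = negative link cost plus a bonus
    for reaching the destination."""
    rng = random.Random(seed)
    q = {u: {v: 0.0 for v in nbrs} for u, nbrs in graph.items()}
    for _ in range(episodes):
        node = src
        for _ in range(10):                      # cap episode length
            if node == dst:
                break
            nbrs = list(graph[node])
            nxt = (rng.choice(nbrs) if rng.random() < eps
                   else max(nbrs, key=lambda v: q[node][v]))
            reward = -graph[node][nxt] + (10.0 if nxt == dst else 0.0)
            future = 0.0 if nxt == dst else max(q[nxt].values())
            q[node][nxt] += alpha * (reward + gamma * future - q[node][nxt])
            node = nxt
    path, node = [src], src                      # greedy rollout
    while node != dst and len(path) < 10:
        node = max(graph[node], key=lambda v: q[node][v])
        path.append(node)
    return path
```

On this toy graph, `q_route(GRAPH, 0, 3)` learns to prefer the low-total-cost route 0-1-2-3 over the two-hop but costlier 0-2-3.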

The study of blood glucose level prediction using photoplethysmography and machine learning (PPG와 기계학습을 활용한 혈당수치 예측 연구)

  • Cheol-Gu, Park;Sang-Ki, Choi
    • Journal of Digital Policy
    • /
    • v.1 no.2
    • /
    • pp.61-69
    • /
    • 2022
  • This paper develops and verifies a blood glucose level prediction model based on biosignals obtained from photoplethysmography (PPG) sensors, ICT technology, and data. Blood glucose prediction uses a machine learning MLP architecture. The input layer of the model consists of 10 input nodes (heart rate, heart rate variability, age, gender, VLF, LF, HF, SDNN, RMSSD, and PNN50), followed by 5 hidden layers. The results of the predictive model are MSE=0.0724, MAE=1.1022, and RMSE=1.0285, and the coefficient of determination (R2) is 0.9985. A blood glucose prediction model using biosignal data collected from digital devices and machine learning was established and verified. If research continues to standardize machine learning datasets for various digital devices and to increase their accuracy, this could become an alternative method for individual blood glucose management.
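The network shape described in the abstract can be sketched in plain Python. The abstract gives the 10 input features and the count of 5 hidden layers but not their widths, so the layer widths, the ReLU activation, and the random initialization below are assumptions; a real model would be trained (e.g. by backpropagation) before use:

```python
import random

# The 10 input features named in the abstract.
FEATURES = ["heart_rate", "hrv", "age", "gender", "vlf",
            "lf", "hf", "sdnn", "rmssd", "pnn50"]

def init_mlp(layer_sizes, seed=0):
    """Random weights for a fully connected MLP; each neuron stores its
    input weights plus one trailing bias term."""
    rng = random.Random(seed)
    return [[[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
             for _ in range(n_out)]
            for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]

def predict(weights, x):
    """Forward pass: ReLU on hidden layers, linear single-node output
    (the predicted glucose value)."""
    a = x
    for li, layer in enumerate(weights):
        z = [sum(w * v for w, v in zip(neuron[:-1], a)) + neuron[-1]
             for neuron in layer]
        a = z if li == len(weights) - 1 else [max(0.0, v) for v in z]
    return a[0]

# 10 inputs -> 5 hidden layers (widths are assumptions) -> 1 output
mlp = init_mlp([10, 32, 32, 16, 16, 8, 1])
```

Feature scaling and training are omitted; the sketch only shows the 10-input, 5-hidden-layer, single-output topology stated in the abstract.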

RIDS: Random Forest-Based Intrusion Detection System for In-Vehicle Network (RIDS: 랜덤 포레스트 기반 차량 내 네트워크 침입 탐지 시스템)

  • Daegi, Lee;Changseon, Han;Seongsoo, Lee
    • Journal of IKEEE
    • /
    • v.26 no.4
    • /
    • pp.614-621
    • /
    • 2022
  • This paper proposes RIDS (Random Forest-Based Intrusion Detection System), an intrusion detection system that detects hacking attacks on the in-vehicle network using random forests. RIDS detects three typical attacks: DoS (denial of service), fuzzing, and spoofing attacks. It detects attacks based on four parameters: the time interval between data frames, its deviation, the Hamming distance between payloads, and its deviation. RIDS was designed with a memory-centric architecture, where node information is stored in memories, and with a scalable architecture, where DoS, fuzzing, and spoofing attacks can all be detected by adjusting the number and depth of the trees. Simulation results show that RIDS achieves 0.9835 accuracy and a 0.9545 F1 score and can detect the three attack types effectively.
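The four detection parameters named in the abstract are concrete enough to sketch. The window format (timestamp, payload pairs) and the population-deviation formula below are assumptions for illustration; in RIDS these four values would feed the random forest classifier:

```python
def hamming(p1: bytes, p2: bytes) -> int:
    """Bit-level Hamming distance between two CAN payloads."""
    return sum(bin(a ^ b).count("1") for a, b in zip(p1, p2))

def frame_features(frames):
    """frames: list of (timestamp, payload) for one window. Returns the
    four RIDS-style features: mean and deviation of the inter-frame time
    interval, and mean and deviation of the payload Hamming distance."""
    dts = [t2 - t1 for (t1, _), (t2, _) in zip(frames, frames[1:])]
    hds = [hamming(p1, p2) for (_, p1), (_, p2) in zip(frames, frames[1:])]
    mean = lambda xs: sum(xs) / len(xs)
    dev = lambda xs: (sum((x - mean(xs)) ** 2 for x in xs) / len(xs)) ** 0.5
    return (mean(dts), dev(dts), mean(hds), dev(hds))
```

A DoS flood would show abnormally small time intervals, while a fuzzing attack would raise the Hamming-distance statistics, which is why these four features separate the attack classes.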

Dynamic Subspace Clustering for Online Data Streams (온라인 데이터 스트림에서의 동적 부분 공간 클러스터링 기법)

  • Park, Nam Hun
    • Journal of Digital Convergence
    • /
    • v.20 no.2
    • /
    • pp.217-223
    • /
    • 2022
  • Subspace clustering for online data streams requires a large amount of memory resources as all subsets of data dimensions must be examined. In order to track the continuous change of clusters for a data stream in a finite memory space, in this paper, we propose a grid-based subspace clustering algorithm that effectively uses memory resources. Given an n-dimensional data stream, the distribution information of data items in data space is monitored by a grid-cell list. When the frequency of data items in the grid-cell list of the first level is high and it becomes a unit grid-cell, the grid-cell list of the next level is created as a child node in order to find clusters of all possible subspaces from the grid-cell. In this way, a maximum n-level grid-cell subspace tree is constructed, and a k-dimensional subspace cluster can be found at the kth level of the subspace grid-cell tree. Through experiments, it was confirmed that the proposed method uses computing resources more efficiently by expanding only the dense space while maintaining the same accuracy as the existing method.
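The level-by-level expansion described in the abstract can be sketched for two levels. The grid width, the density threshold, and the restriction of level-2 counting to cells whose 1-d parent cells are dense are simplifications of the paper's grid-cell subspace tree, chosen to mirror its "expand only the dense space" idea:

```python
from collections import defaultdict
from itertools import combinations

def dense_subspace_cells(points, n_dims, grid=4, threshold=3):
    """Count points per grid cell in each 1-d subspace; only cells whose
    1-d parents are dense are expanded to 2-d subspaces (the child
    level of the subspace grid-cell tree)."""
    def cell(p, dims):
        # a cell is identified by its (dimension, grid index) pairs
        return tuple((d, int(p[d] * grid)) for d in dims)

    counts1 = defaultdict(int)
    for p in points:
        for d in range(n_dims):
            counts1[cell(p, (d,))] += 1
    dense1 = {c for c, n in counts1.items() if n >= threshold}

    counts2 = defaultdict(int)
    for p in points:
        for d1, d2 in combinations(range(n_dims), 2):
            # expand only if both 1-d parent cells are dense
            if cell(p, (d1,)) in dense1 and cell(p, (d2,)) in dense1:
                counts2[cell(p, (d1, d2))] += 1
    dense2 = {c for c, n in counts2.items() if n >= threshold}
    return dense1, dense2
```

Iterating this scheme up to level n yields the maximum n-level tree described in the abstract, with k-dimensional subspace clusters found at level k.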

Study on Distributed Ledger Technology using Thing-user Group Management of Network of Everything (만물네트워크의 사물유저 그룹 관리 기반의 분산원장 기술에 대한 연구)

  • Kim, Suyeon;Kahng, Hyun Kook
    • Journal of Software Assessment and Valuation
    • /
    • v.16 no.2
    • /
    • pp.77-85
    • /
    • 2020
  • In this paper, we study the operation of distributed ledger technology, which is used as a core technology for smart contracts, and its components. As a solution that applies the entities of distributed ledger technology to the Network of Everything (NoE), we propose a distributed ledger protocol using the thing-user social group management function of the NoE protocols being standardized in ISO/IEC JTC1 SC6. The thing-user social group management function of NoE provides stable protocol operation and data transmission management, and offers group management functions such as member discovery and data transmission channel management. By providing a service that applies the components of distributed ledger technology, it is expected to be useful for the member management of distributed ledger nodes. We intend to actively reflect this technology in the future network functions of ISO/IEC JTC1 SC6, where standardization is in progress.

High Resolution and Large Scale Flood Modeling using 2D Finite Volume Model (2차원 유한체적모형을 적용한 고해상도 대규모 유역 홍수모델링)

  • Kim, Byunghyun;Kim, Hyun Il;Han, Kun Yeun
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2020.06a
    • /
    • pp.413-413
    • /
    • 2020
  • In flood modeling using Godunov-type models, structured rectangular grids or unstructured triangular grids are generally applied. In flood modeling with a two-dimensional numerical model, the information of the study basin is assigned to the nodes or centers of the grid, so the type of grid and the way it is generated can affect the error in the model's input data. Therefore, if the study basin has highly variable topography or reaches with strong flow patterns or flow fluctuations, the input data error can be minimized by using a high-resolution grid. In this study, two investigations were performed. First, the flood levels and inundation extents computed by a two-dimensional flood model were compared and analyzed according to grid shape and resolution. The study basin is the Severn River basin in the UK, where a flood occurred from October 29 to November 19, 2000. The topographic data for flood modeling were constructed from 3 m resolution LiDAR (Light Detection And Ranging), and four ASAR (Advanced Synthetic Aperture Radar) images taken during the flood period (August 11, 14, 15, and 17, 2000) were used to compare the two-dimensional flood levels and inundation extents by grid type and resolution. That is, using the time of maximum inundation and the recession period of the flood flow captured in the ASAR images, the results of the two-dimensional inundation models for each grid type were compared not only for the maximum inundation extent but also for the periods when the flood was rising and when it was receding due to downstream drainage. Second, for a large basin of 2,500 K㎡ in the middle reach of the Amazon River, two-dimensional modeling of the flood and dry seasons was performed using SRTM (Shuttle Radar Topography Mission) topographic data, and the results were compared with satellite data.


Analyses of Security Issues and Requirements Under Surroundings of Internet of Things (사물인터넷 환경하에서 보안 이슈 및 요구사항 분석)

  • Jung Tae Kim
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.4
    • /
    • pp.639-647
    • /
    • 2023
  • A variety of communications have been developed and advanced by integrating wireless and wired connections with heterogeneous systems. Traditional technologies mainly focus on information technology based on computer techniques in the industry, manufacturing, and automation fields. As new technologies are developed and combined with traditional techniques, many new applications have emerged and merged with existing mechanisms and skills. Representative examples are IoT (Internet of Things) services and applications. IoT is a breakthrough technology and one of the innovative industries of the so-called fourth industrial revolution. Due to limited resources such as small memory, low power, and limited computing capability, IoT devices are vulnerable and exposed to security problems. In this paper, we review and analyze security challenges, threats, and requirements for IoT services.

Stability Analysis of Multi-motor Controller based on Hierarchical Network (계층적 네트워크 기반 다중 모터 제어기의 안정도 분석)

  • Chanwoo Moon
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.3
    • /
    • pp.677-682
    • /
    • 2023
  • A large number of motors and sensors are used to drive a humanoid robot. To solve the wiring problem that occurs when connecting multiple actuators, controllers based on a communication network have been used, mainly CAN, a highly reliable communication protocol that is also advantageous in terms of cost. In terms of controller structure, a torque-control-type structure, which makes it easy to implement advanced algorithms in the upper controller, is preferred. In this case, the low communication bandwidth of CAN becomes a problem, and to obtain sufficient communication bandwidth, the network is separated into multiple CAN networks. In this study, a stability analysis of the transmission time delay is performed for a multi-motor control system in which high-speed FlexRay and low-speed CAN communication networks are hierarchically connected to obtain high communication bandwidth, so that sensor information and driving signals are delivered within the allowed transmission time. The proposed hierarchical network-based control system is expected to improve control performance because it can implement a multiple-motor control system with a single network.

Research on Performance of Graph Algorithm using Deep Learning Technology (딥러닝 기술을 적용한 그래프 알고리즘 성능 연구)

  • Giseop Noh
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.1
    • /
    • pp.471-476
    • /
    • 2024
  • With the spread of various smart and computing devices, big data are being generated widely. Machine learning comprises algorithms that perform inference by learning data patterns. Among the various machine learning algorithms, the one attracting attention is deep learning, based on neural networks, which is achieving rapid performance improvements as various applications are released. Recently, among deep learning approaches, attempts to analyze data using graph structures are increasing. In this study, we present a graph generation method for transfer to a deep learning network. This paper proposes a method of generalizing node properties and edge weights in the graph generation process and converting them into a structure for deep learning input through matricization. We present a method of applying a linear transformation matrix that preserves attribute and weight information in the graph generation process. Finally, we present a deep learning input structure for a general graph and an approach for performance analysis.
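A minimal version of the matricization step described in the abstract might look as follows. The layout (a weighted adjacency matrix with linearly transformed node attributes appended column-wise) and the transformation matrix `W` are assumptions for illustration, not the paper's exact construction:

```python
def matmul(X, W):
    """Plain-Python matrix product X @ W."""
    return [[sum(x * w for x, w in zip(row, col)) for col in zip(*W)]
            for row in X]

def graph_to_input(n, edges, attrs, W):
    """edges: (u, v, weight) triples; attrs: one feature row per node;
    W: linear transformation preserving attribute information.
    Returns [A | attrs @ W], an n x (n + m) deep-learning input matrix
    that keeps both edge weights and node attributes."""
    A = [[0.0] * n for _ in range(n)]
    for u, v, w in edges:
        A[u][v] = A[v][u] = w          # undirected weighted adjacency
    XW = matmul(attrs, W)
    return [a_row + x_row for a_row, x_row in zip(A, XW)]
```

Each row of the result describes one node (its weighted links plus its transformed attributes), so the matrix can be fed directly to a dense deep-learning input layer.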

An Analysis into the Characteristics of the High-pass Transportation Data and Information Processing Measures on Urban Roads (도시부도로에서의 하이패스 교통자료 특성분석 및 정보가공방안)

  • Jung, Min-Chul;Kim, Young-Chan;Kim, Dong-Hyo
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.10 no.6
    • /
    • pp.74-83
    • /
    • 2011
  • The high-pass transportation information system directly collects section information by using probe cars and can therefore offer more reliable information to drivers. However, because the running conditions and features of the probe cars and the statistical processing methods affect the reliability of the information, and particularly because the section travel time on urban roads is strongly influenced by whether signal delay has occurred, there can be large deviation among the collected individual probe data. Accordingly, research in multiple directions is necessary to enhance the credibility of the section information. Yet previous studies on high-pass information provision have been conducted on highway sections with continuous-flow characteristics, which limits their applicability to urban roads with interrupted-flow characteristics. Therefore, this research analyzes the features of high-pass transportation data on urban roads and seeks a proper processing method. When the characteristics of the high-pass data collected from RSEs on urban roads were analyzed using a time-space diagram, the collected data proved to follow a certain pattern, with the period of the signal cycle of the end node, according to the arriving cars' waiting at signals. Moreover, the number of signal stops and the waiting time caused deviation in the collected data, and it was larger under congestion; the analysis showed that this was because the increased number of signal stops under congestion caused the deviation to be only partially offset. The analysis results indicate that it is appropriate to use the mean of the high-pass data collected on urban roads as the representative value, in order to reflect the transportation features of waiting at signals, and that the criteria for judging delay and congestion need to change depending on the features of the signals and roads. The results of this research are expected to be a foundation for improving the reliability of high-pass information on urban roads.